Elsewhere: Atheism and objective rights

Back in August, Clark at Popehat wrote a slightly confusing post on how some atheists are confused about rights because they speak as if rights exist while also saying that nothing but matter exists. Clark seems to be one of those theists who think that gods are required to exist for objective rights to exist, but he doesn’t really say why he thinks that. (The real trick in all these arguments is specifying quite what you mean by “objective”. I enjoyed John D’s quote from Richard Joyce: “So many debates in philosophy revolve around the issue of objectivity versus subjectivity that one may be forgiven for assuming that someone somewhere understands this distinction.”)

I argued that Clark had got materialism wrong. Someone asked how any atheist can avoid the conclusions of Alex Rosenberg. I slightly facetiously replied “by not being an eliminative materialist”, but I can do better than that, I think. Rosenberg gets a lot of counterargument from avowed naturalists who are philosophically respectable. It doesn’t seem unreasonable for an atheist, especially one who isn’t an expert on philosophy, not to share Rosenberg’s conclusions.

Typically, Christian apologists ignore any distinction between varieties of naturalistic worldview (see Luke M’s interview with John Shook) and go with something like “if atheism is true, we’re nothing but matter in motion, chemical fizzes like soda spilled on the ground”. They then make an argument which uses the fallacy of composition to “show” that properties which matter and energy don’t have can’t be real on atheism (by which they mean some kind of materialism). This is all bunk, but pretty popular bunk, at least in the blogosphere, if not in philosophy journals.

Finally, I got into Yudkowsky’s belief in moral absolutes, which is interesting as Yudkowsky’s an atheist. Massimo P had a post about that back in January, where he sort of disagreed with Yudkowsky but then actually seemed to agree with him if you stripped away the layers of words a bit. My most significant comment on that is here. Yudkowsky’s transition from what looks like mathematical Platonism to the claim that morality is absolute deserves a post of its own, which I might get around to at some point. There’s a lesson for atheists, though: atheist appeals to evolution as a moral justifier are confused. Evolution might be a (partial) answer to “why do I care about X?” but not “why should I care about X?”

11 Comments on "Elsewhere: Atheism and objective rights"


  1. I’d call myself a moral nihilist, in that I don’t think there’s any moral dimension to the universe outside of human experience. As Democritus put it, “Nothing exists except atoms and the void”, or as Sarah Silverman put it in her Twitter bio, “We’re all just molecules, cutie.”

    I’m also a chess nihilist: I also don’t believe the rules of chess are ingrained into the universe or dictated by God. That doesn’t mean there’s no reason to abide by the rules of chess *if you want to play chess*. Or that I wouldn’t seek some kind of sanction against somebody who challenged me to a game of chess and then tried to get away with moving their prawn diagonally or whatever. As a chess nihilist I don’t concern myself with the petty details of chess.

    I couldn’t post this comment if there wasn’t a general consensus to obey the HTTP specification among the right people. That doesn’t mean the HTTP specification is objectively real (or false), but it is objectively useful. If somebody tried to push a different convention, then I’d suspect they were probably doing it out of self-interest and I’d oppose it unless I was shown a good reason (probably aligned with my self-interest) to do otherwise. If somebody’s pushing the idea that universal human rights are a bad thing, then I’ll oppose it on the same grounds.
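    The HTTP point can be made concrete: whether a message conforms to the convention is objectively checkable, even though the convention itself is just something we agreed on. A minimal sketch (my own toy illustration, checking only the rough shape of an HTTP/1.1 request line, not the full grammar from the spec):

```python
import re

# Toy check: does a string have the rough shape of an HTTP/1.1 request
# line ("METHOD target HTTP/x.y")? The spec is a convention, but
# conformance to it is an objective matter once the convention is fixed.
REQUEST_LINE = re.compile(r"^[A-Z]+ \S+ HTTP/\d\.\d$")

def conforms(line):
    return bool(REQUEST_LINE.match(line))

assert conforms("GET /index.html HTTP/1.1")     # follows the convention
assert not conforms("please give me the page")  # doesn't, however polite
```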

    tl;dr We have a *convention* on human rights because it’s just a convention, but conventions are useful.

    Reply

    1. It’s objectively true that the rules of chess imply that one cannot move a prawn diagonally. If you play a game in which you do that, it’s un-chess-like, even if you happen to use the word “chess” to describe what you’re doing, because it’s clear that you’re using the word to refer to a different game or don’t understand the rules.
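      The chess point can even be sketched in code (a toy simplification of my own: non-capturing white pawn moves only, no en passant or promotion). Once the rules are fixed, “that move is illegal” becomes an objectively decidable claim:

```python
# Toy rule-checker: with the rules fixed, legality is objectively decidable.
# Simplified to non-capturing white pawn moves only (no en passant, etc.).
def is_legal_white_pawn_move(src, dst, first_move=False):
    """src and dst are (file, rank) pairs; white pawns move to higher ranks."""
    file_delta = dst[0] - src[0]
    rank_delta = dst[1] - src[1]
    if file_delta != 0:
        return False  # a non-capturing pawn never changes file
    if rank_delta == 1:
        return True   # one square forward is always legal
    return rank_delta == 2 and first_move  # two squares only from the start

assert not is_legal_white_pawn_move((4, 2), (5, 3))  # the diagonal "prawn" move
assert is_legal_white_pawn_move((4, 2), (4, 4), first_move=True)
```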

      Yudkowsky’s trick is to run with this: if you’re eating babies, it’s un-moral-like. That’s one of the many things that “moral” means. In the Highly Advanced Epistemology 101 for Beginners series, there’s a conceptual link between the logical pinpointing stuff about the natural numbers and the post which finally made me get why Yudkowsky calls himself a moral realist, By Which It May Be Judged. If we can have the sort of dialogue that occurs in the logical pinpointing post but applied to morality rather than numbers, we end up pinpointing this (vastly complicated) concept of morality, so why isn’t that concept as real as the natural numbers? Theoretically, we could build a machine which can make moral judgements (which is what he wants to do, of course), so “morality” exists as much as any other algorithm does. Once you understand the algorithm that “morality” labels (just as you understand the game “chess”), it’s objectively true that eating babies is not moral.

      This does have the feeling of a trick about it. Partly that’s because what a lot of people want out of “objective morality” is not just that it’s a label, but that it’s somehow motivating to any conceivable mind (but Y thinks that’s impossible).

      Partly that’s because this particular concept seems somehow arbitrary: there’s a vast space of possible algorithms, and another algorithm, when wired up to your action system, would have you convert all matter into paperclips, or eat babies, or anything else you can imagine. So what’s special about the one we call “morality”? Y seems to say that any justification for choosing morality over paperclips will be in terms of concepts by which morality is judged as better than paperclips, hence the “logic all the way down” stuff which Massimo objected to so strenuously.

      It’s quite a neat view when you get it, but that’s not to say there aren’t problems with it, notably what we should think is going on when people have moral disagreements. Y is careful to discourage the use of the phrase “clippy-better” (for things that Clippy finds motivating) because clippy-better is nothing like better, any more than chess is like football. But when humans have moral disagreements, sometimes it’s clear that they’re shooting for the same sort of thing but have more subtle differences, so it’s not clear that “morality” refers to only one concept.

      Reply

      1. Yeah, he’s running into the fact that what he really means is that morality is a set of _axioms_. And it so happens that, because we evolved together and have similar societies, humans tend to have similar-ish moralities a lot of the time.

        But that doesn’t make them objective, it makes them entirely subjective – just similar.

        Reply

        1. Would you say arithmetic was subjective, though? I think most people like to point to maths as the exemplar of objectivity, hence Y’s trick of turning morality into logic.

          I think there might be a difference between the natural numbers and morality to tease out, but it’s tricky.

          The natural numbers model the behaviour of some objects in the universe and hence we’ve acquired the concept of them (and elaborated it to numbers for which there are no counterparts in counted objects). If this weren’t the case, natural numbers would just be some obscure mathematical curiosity. But it is the case, so natural numbers seem linked to the world of apples and oranges and so on.

          Morality doesn’t seem to be about modelling the universe in the same way, but I’m having trouble stating the difference exactly. It does classify actions or states of affairs, I suppose, but then arguably so does arithmetic: what’s the difference between “if I ate this baby, the world would be worse” and “if I have 5 apples and take away 2, there will be 3 left”?

          Reply

          1. It’s a system. There are _many_ theoretical arithmetics.

            Only one of them is used commonly, but that’s because it’s the one that produces models with predictive power for us. If there were a single morality that was better than all of the others at predicting something about the world, then it would also be more real.
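            That there are many internally consistent arithmetics, of which we privilege the one that models the world, can be illustrated with a toy contrast (my own example) between ordinary addition and mod-12 “clock” addition:

```python
# Two internally consistent arithmetics; we privilege the one whose
# models predict the world we care about (apples rather than clock faces).
def plus(a, b):
    return a + b            # ordinary arithmetic: models counting apples

def clock_plus(a, b):
    return (a + b) % 12     # mod-12 arithmetic: models hours on a clock face

assert plus(9, 5) == 14       # nine apples plus five apples
assert clock_plus(9, 5) == 2  # five hours after nine o'clock
```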

            > What’s the difference between “if I ate this baby, the world would be worse” and “if I have 5 apples and take away 2, there will be 3 left”?

            That “Worse” is a value judgement, and thus intrinsically subjective!

            Reply

            1. > That “Worse” is a value judgement, and thus intrinsically subjective!

              You’re restating that there’s a difference between the two statements by putting one into a category which the other is not in. But what makes something a “value judgement”?

              It can’t really be whether the judgement takes place in a human brain: both statements seem to follow from adopting concepts that humans find useful but which, for example, aliens with different brains and sensory systems may not. In both cases, if someone understood the concept deeply enough, they might explain the concept to an alien and it might then be able to comment on numerical or moral problems (though it might not be motivated to act).

              I think there might be something about complexity here: we can build machines to count apples so we have a conscious understanding of the process, and it’s so simple that we can explain it to other humans and they can count their apples or sheep or whatever. But morality is vastly more complex and so it doesn’t feel like we really know what we’re doing when we make moral judgements. All the simple restatements of what morality is fail in some way (see Socrates Jones: Pro Philosopher).

              Still, as Maitzen says, saying “morality is subjective” can be taken as “morality is a matter of individual taste”, and I don’t think I agree with that. It is certainly a human concern but it’s more like money than my favourite ice cream flavour, I think: I don’t get to decide that I’d prefer my bank balance to be a certain amount so therefore it is that amount, but money is a human construction. The posh philosophy word for this is “intersubjective”, I think.

              Reply

              1. A value judgement is one that is dependent on the values of the observer, rather than one that is dependent solely on attributes of the thing being observed.

                So an object is “red” (because it reflects earth-type sunlight in certain frequencies and not in others), but is “pretty” because the observer likes the way that colour looks.

                In this case “worse” is a statement about the preferences of the observer – for living people over dead ones, for worlds with flowers over ones made of paperclips.

                Subjectivity can be shared (and I agree that intersubjectivity is key) – but that doesn’t make it objective. It’s objective if it’s part of the actual fabric of the thing, not part of the perceptions of it. That’s what subjective/objective _means_ – that something is either to do with our experience of an object, or to do with the object itself.

                The murder of a billion children is only good/bad because it has been judged so. A particular judgement may be very common amongst people, but that doesn’t prevent it being an attribute of the people, rather than of the children.

                Reply

          2. One possibly relevant difference between the natural numbers and human morality is that the natural numbers are, well, *natural* in the sense that their axioms are really simple and they (or something awfully like them) are pretty much bound to come up in any attempt to think really clearly about almost anything. Whereas most versions of human morality are full of kinda-arbitrary complexities (this is of course one of Yudkowsky’s favourite points) and a person, or a species, could get by just fine with any number of different and incompatible sets of values.

            Reply

    2. The problem is you have nothing to say to people who move a pawn 6 spaces and declare that they’ve captured your king, responding to your protests that they’re not actually playing chess by saying, “Who cares? I don’t want to play chess.”

      Reply

      1. Sure, but it seems it’s always open to someone to say “who cares whether I’m maximising utility/following my created purpose/obeying the categorical imperative/insert your favourite meta-ethic here?” To have something to say to them seems to require a universally compelling argument, and Y quite persuasively claims there aren’t any. Y is perfectly happy to say that people who don’t care about morality are morally wrong, of course. 🙂

        Reply

  2. I’d love to hear a decent explanation of the source of morality in a non-supernatural world. If you could point me at one (that’s not tens of thousands of words long) that would be great.

    Reply
