Morality again

Here’s what I currently think about morality. Perhaps the pros in the audience can tell me whether this has a name.

It is very hard to come up with a general theory about what makes something right which isn’t open to objections based on hard cases where the theory contradicts our intuitions (consider the dust speck problem if you’re a utilitarian; or the thought that God might command us to eat babies, if you’re a believer in Divine Command Theory). Russell Blackford says:

I’ve seen many attempts to get around the problem with morality – that it cannot possibly be everything that it is widely thought to be (action guiding, intrinsically prescriptive, objectively correct, etc.). In my experience, this is like trying to get rid of the bump in the carpet. The bump always pops up somewhere else. We have to live with it.

Call something an “intuition” if it’s something we just seem to feel is true. Perhaps there’s no empirical evidence for it; perhaps it doesn’t even seem to be the sort of thing there could be empirical evidence for. There’s a question of whether intuitions are reliable, but one of the things we want from a theory is that it seems compelling, and the conflicts between theory and intuition mentioned above typically leave us finding the theory uncompelling. So a successful theory seems to involve satisfying our intuitions (or at least our meta-intuitions, the means by which some of our intuitions can be changed, since there are convinced utilitarians who really do endorse torture over dust specks, Divine Command Theorists who believe God can justly command genocide, and so on).

In the case of free-standing feelings that we ought to do something regardless of other benefits to us (assuming that such feelings exist and aren’t always just concealed desires for our own benefit, for example the pleasure we get from doing good), it seems that our upbringing or genes have gifted us with these goals (even in the case of the pleasure, something has arranged things so that doing good feels pleasurable to us).

Assuming that we could work out any particular person’s process for deciding whether something is right, we could possibly present it to them and say “there you go, that’s morality, at least as far as you’re concerned”. There’s the amusing possibility that they’d disagree, I suppose, since I think a lot of the process isn’t consciously available to us. I think the carpet bump occurs at least in part because the typical human process is a lot more complex than any of the grand philosophical theories make it out to be.

We might also encounter people who disagree with us but are persuadable based on intuitions common to most humans, humans who aren’t persuadable, human sociopaths, or (theoretically) paperclip maximizers. In the cases where someone else’s morality is so alien that we cannot persuade them that it’d be bad to kill people for fun or turn the Earth and all that it contains into a collection of paperclips, we can still think they ought not to do that, but it doesn’t really do us much good unless we can enforce it somehow. I see no reason to suppose there’s a universal solvent, a way of persuading any rational mind that it ought to do something independent of threats of enforcement.

And that’s about it: when I say you ought or ought not to do something, I’m appealing to intuitions I hope we have in common, or possibly making a veiled threat or promise of reward. This works because it turns out that many humans do have a lot in common, especially if we were raised in similar cultures. But there’s no reason to suppose there’s more to it than that, moral laws floating around in a Platonic space or being grounded by God, or similar.

I don’t find that this gives me much trouble in using moral language like “right” and “wrong”. To the extent that other people use those terms while thinking that there are rights and wrongs floating around in Platonic space which would compel any reasoning mind, I’m kind of an error theorist about their usage, I suppose; but I don’t suppose that everyone does use the terms that way.

38 Comments on "Morality again"


  1. Are you saying you *don’t* accept that the torture option is a better option than the dust specks option? There’s a fairly compelling argument that way that starts: it’s better that one person get two dust specks than that a billion get one. It’s better that one person get three dust specks than that a billion get two.

    Have you read Joshua Greene’s “The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It”?

    http://lesswrong.com/lw/10f/the_terrible_horrible_no_good_very_bad_truth/

        1. I don’t really have a point :->

          It was more that the discussion reminds me of the arguments around utilitarianism, and whether one can balance dust against torture.

    1. Are you saying you *don’t* accept that the torture option is a better option than the dust specks option?

      Currently I don’t: it doesn’t seem that any amount of dust specks could be commensurate with torture.

      it’s better that one person get two dust specks than that a billion get one. It’s better that one person get three dust specks than that a billion get two.

      I’m not sure how this scales up: assuming that torture is equivalent to a whole lot of dust specks, don’t you end up with “It’s better that one person gets a slightly worse torture than that a billion people get a slightly milder one”?

      1. Yes, that’s right. We first progress to these: it’s better that one person get a slight graze on their forearm than that a billion people get a hundred dust specks each. It’s better that one person get a small cut….

        and go through something like this:

        it’s better that one person get 1.1 seconds of torture than that a billion people get one second.

        And at the end, I hope you’ll agree that it’s better that one person get 50 years of torture than that a billion people get 50 years minus 0.1 seconds.

        Given the unbelievably vast numbers I have to play with, I can very easily make the differences in steps much smaller, and the numbers of people weighed on either side much larger.

        From there, I think you have to concede that torture is the better option than dust specks on pain of circular preferences.
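
        (To see the shape of that chain numerically, here is a toy sketch, assuming, as the argument does, that harm is a single real number which simply adds across people. The per-person harm values and the 1.5x step are invented for illustration; the point is only that each step trades a billion-fold drop in headcount for a small rise in per-person harm, so the additive total falls at every step.)

```python
# Toy model of the chain under purely additive aggregation of harm.
# All numbers are invented; "harm" is in arbitrary units.

BILLION = 10**9

def total_harm(people, harm_per_person):
    # Additive assumption: total harm is per-person harm times headcount.
    return people * harm_per_person

people = BILLION**30   # stand-in for an "unbelievably vast" crowd
harm = 1e-9            # stand-in for one dust speck's worth of harm

for _ in range(30):
    fewer, worse = people // BILLION, harm * 1.5
    # A billion times fewer sufferers, each only 1.5x worse off:
    # the additive total strictly decreases at every step.
    assert total_harm(fewer, worse) < total_harm(people, harm)
    people, harm = fewer, worse

print(people, harm)    # ends with 1 person bearing a much larger, but finite, harm
```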

        1. I think if one doesn’t think that torture and dust specks are commensurable, then one can agree with the slide to large numbers of dust specks but not thence to torture, and one can agree with statements about torture or about dust specks but not those which mix the two.

          It’s an interesting question where the boundaries lie at which I’d say “no amount of trivial inconvenience is better than one person experiencing serious pain”. But a straightforward modelling of pain as a real number which can be added between persons, and defining “better” as “that which minimises the total pain”, doesn’t seem to model my moral circuitry very well.

          It’s possible this means I’m irrational, but as simont points out, it’s not clear from reading the wiki page that the u(outcome) function has to rate the disutility of pain in such a way that there’s a linear scale which can be added across people: in fact, money seems to be an example of a quantity which we think of as real-valued and addable between people, yet which has different utilities to different people.

          1. OK so there are two qualitatively different kinds of bad thing. So my next move is to figure out the very worst thing that’s like a dust speck – perhaps a dust speck every second for 50 years – and the very mildest thing that counts as torture – a playground Chinese Burn maybe – and ask whether it’s better that a billion people get a dust speck a second for 50 years, or one person get a Chinese Burn.

            1. I can’t speak for paul, but fwiw, I would (currently) prefer the dust-specks to the torture, but I would also admit that I find the argument proposed for the torture basically plausible, so I must have a hole in my ethical system somewhere; it’s just not clear whether it’s in my reasons for preferring the dust-specks, or in the links in the chain of the argument for preferring the torture. And I would like to iron out inconsistencies, but I also think my moral system will always be an awkward conflation of feelings and heuristics with inconsistencies in edge cases, and it’s better to say I’m fairly sure of area A and area B, even if I’m not sure where the boundary between them is, and work on acting morally there, than to give up in despair because the system isn’t perfect.

              But I’m not sure.

              1. The inconsistency in your moral values here is called “scope insensitivity”. It happens because your intuitions can’t aggregate suffering up to anything close to such large numbers, so you vastly underestimate the total misery of a googolplex dust specks. The correct fix is a process Eliezer Yudkowsky calls “shut up and multiply”.

                1. Hm. Thank you.

                  (I deleted a rather longer reply because I think it wandered more than was meaningful.)

                  I admit, I obviously do suffer from scope insensitivity. For instance, I can often tell because a minor rephrasing of the problem gives me a completely different emotional response. But here, I find strong intuitions that don’t go away when I rephrase the problem.

                  So although I feel like I believe three competing memes:

                  1. a dust-speck is so inconsequential NO NUMBER of people suffering from one is equivalent to one really bad thing
                  2. Utilities are additive in some reasonable way
                  3. My ethical guidelines should be reasonably consistent

                  I’m not positive the first one is the one I want to give up…

                  When it comes to examples like funding medicine, I think my intuition specifically DOES suffer from scope insensitivity, and the right answer is to maximize quality-adjusted life years or something like it, even if it makes us uncomfortable. But I’m not sure if this case is just MORE scope insensitivity, or if there’s a legitimate other moral principle underlying my provisional decision.
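
                  (To make the funding example concrete, here is a toy sketch with entirely invented numbers: quality-adjusted life years are roughly “years gained times quality weight”, summed over everyone helped, and maximising that sum is the “shut up and multiply” answer even when each individual gain is tiny.)

```python
# Toy QALY comparison; the figures are made up, not real health-economics data.
interventions = {
    "intensive treatment for a few": {"people": 100, "qalys_per_person": 5.0},
    "tiny benefit for very many": {"people": 1_000_000, "qalys_per_person": 0.01},
}

for name, d in interventions.items():
    total = d["people"] * d["qalys_per_person"]   # shut up and multiply
    print(f"{name}: {total:,.0f} QALYs")

# 500 vs 10,000 QALYs: the aggregate favours the widely spread tiny benefit,
# even though intuition is drawn to the vivid individual cases.
```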

                  1. Well, if you give up the third then you certainly won’t have to worry about me coming up with another tricky example :-). However, if you give up the second, you’ll still have to decide where to break the circularity in preferences that I outline above.

                    It’s hard for me to remember what it might be like to believe that there could be two totally different domains of suffering, with an absolute, perfect dividing line between the two, where one morally trumps the other no matter how much suffering is aggregated in the minor domain. On what side of the line are you going to place a graze, a brief static shock, and so forth? I think you’ll see that as soon as you try to do it, I’m going to be able to choose the smallest suffering in the major domain and the biggest aggregation of suffering in the minor one such that your intuitions revolt at choosing the latter over the former.

                    Why would you even suspect that it’s more than scope insensitivity? It’s pretty obvious that there’s only one place to break the circle in preferences outlined above, and scope insensitivity completely accounts for why that’s a counterintuitive choice. We’re just not built to reason about choices affecting a googolplex people, so we imagine some much smaller number of people getting dust specks – I’ll generously say the entire population of Earth, though really I think it’s a lot smaller than that – and correctly conclude that *that* doesn’t aggregate to something worse than 50 years of torture. But why should such an intuition be a good guide to a harm measured by such a ludicrously, impossibly vast number?

                    Really, I see why people revolt at choosing torture at first, but I just don’t see how anyone resists the conclusion once they’ve seen how impossible the other options are.

                    1. That’s really interesting. I’m not yet convinced by the content of your post, but I do find fairly convincing your experience that apparently you used to be in the same position as me, but now find it totally incomprehensible. Which IS a sign of many things which make sense but which the person doesn’t “get” yet. (Not always, but often.)

                      I mean, I totally agree that the incremental changes from torture to 2 people experiencing 0.99 torture, etc, etc, sound convincing to me. And that making an artificial dividing line into two (or more) categories of harm is immediately exploded by considering the incremental change over the boundary, and is totally not a resolution.

                      But I also know humans are bad at incremental change arguments (eg. you presumably know the flaws in an argument going “is one grain of sand a pile? no? how about two? if you have not-a-pile of grains of sand, does adding one grain make it a pile? no? so by induction NO number of grains of sand is a pile? ok? well, how about 50000000 grains of sand? it is? well, where’s the mistake?”). In that example, I think many people find it hard to point to the mistake, although I assume it’s obvious to you, at least in general terms — but also, I think people are right that if they’re sure 1 grain isn’t a pile and 1000000000 is, then the mistake must be somewhere in the middle, even if they can’t tell where it is.

                      Which makes me think there may be a mistake in the incremental change argument, even though it sounds rational. (Although that’s only a possibility, it’s equally or more likely I’m just wrong but don’t realise it yet.)

                      Are we certain utilities _can_ be added? (Hold on, I remember a link about this, but I have to find it.)

                      For that matter, maybe no system of morality would give us satisfying answers in that case; perhaps we’re lucky that systems of morality serve us as consistently as they do. We can always make them fit by giving up successively more of our intuitions of what’s right, but at some point we might have to admit that there is no consistent system we’re happy with (I don’t WANT to, I HOPE that’s not the case, but…)


                    2. To be fair, I don’t think I was wholly in your camp, but it used to seem to me like a tricky moral conundrum where there were arguments both ways.

                      You’re right that the Sorites paradox is used to argue for all sorts of nonsense. Usually, however, all that’s required to push back against it is to observe that “pile” doesn’t have to have a clear defining line, and in fact almost no words admit of one; everything has edge cases, and that’s OK, the words are still useful. Here, however, far from rescuing you, that observation demolishes your case; the distinction you want to make can only survive if there’s a perfect bright clear line between dust specks and torture, and it’s obvious there isn’t.


                    3. Or to put it another way, I think utilitarianism is very very correct to think in terms of “good done and harm done” rather than in terms of human virtues and vices. But it seems like any attempt to total them in a reasonable fashion always produces unpalatable results — results unpalatable even to philosophers.

                      Of the (few) explanations I’ve heard, I’m not convinced there is a way of adding up utilities that we can agree “works”.

                      For instance, everyone agrees:

                      U(1 person 1 dustspeck) < U(1 person 2 dustspecks) < U(2 people 2 dustspecks each)

                      and

                      U(1 person 1 dustspeck) < U(2 people 1 dustspeck each) < U(2 people 2 dustspecks each)

                      But do we agree on the relative utility of U(1 person 2 dustspecks) and U(2 people 1 dustspeck each)? I agree they must both be somewhere between U(1,1) and U(2,2), and simply adding them up works in many cases. But are they even comparable? (I genuinely don’t know.)

                      The link I was thinking of was from your link to lesswrong, http://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem, which suggests that under some reasonable assumptions, utilities can be assigned real numbers[1]. But I haven’t read it yet.

                      [1] There’s not necessarily any way of comparing utilities between people, but for the dustspecks, it’s problematic enough even for people who DO put the same utility function on the things we’re considering.


                    4. I’ve linked to that directly here btw.

                      I basically support the VNM framework for thinking about these things, but you don’t have to swallow all that, or figure out how best to aggregate the difficult instances you talk about, to see that the SPECKS position is not sustainable. You just have to see that I can build a very fine grained continuum between the two, and put a factor of a billion disparity of aggregation between each very close pair in the continuum, and so build the chain above, and therefore your only options are:

                      (1) Find a place in this continuum where you draw your line
                      (2) Accept having circular preferences
                      (3) Accept TORTURE (ie that torture is the better option)

                      Which of these do you think you’ll end up going for? Or is there a fourth option I don’t see?


                    5. BTW, thank you for an interesting and hopefully productive discussion 🙂

                      I’m still thinking this through as I go, so I may throw out some things that are wrong, or that end up persuading me you were right to start with; I hope this doesn’t seem frustrating, I’m not trying to dodge the issue 🙂

                      But as a thought-experiment (I’m not seriously proposing this, but want to see what happens), can we consider different ways of adding utility?

                      We’re currently supposing that U(N dustspecks[1]) is N*U(dustspeck). But suppose instead the correct addition were something like (10 - 90/(N+9))*U(dustspeck). Then the utility of two dustspecks would be about twice that of one, but however many you had it would never be more than ten times worse.

                      On the one hand, you could say it’s ridiculous that one person’s marginal utility would depend on how many other people are in the same situation.

                      On the other hand, you could say that any number of people having internet connections doesn’t outweigh a requirement for everyone to have clean food, water, and shelter.

                      Is this simply going to produce mathematically inconsistent results? If not, I’m sure it will give unpalatable results in many cases, but perhaps in DIFFERENT places to the usual answers. And, conveniently, will it give a nice mathematically precise answer to where the changeover is?

                      (Having two sorts of utility, one infinitely less than the other, would be a similar sort of utility-adding trick, and it would work the same way, except that we DON’T accept there’s a special level where everything above it outweighs everything below it.)

                      Conversely, what if we simply said the utility of the person with the lowest quality of life outweighs everyone else’s combined (and the second-least outweighs everyone else, etc)? I don’t think most people would accept it, but would it be consistent, and if so, would it answer “CHOOSE DUST SPECK” without contradiction?

                      [1] That is, not of N dustspecks all at once, but of “dustspeck” as a single event with known utility, and N occurrences of that happening to separate people (or possibly to one person at separate times).
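
                      (A throwaway sketch of this thought-experiment, with invented disutility numbers: it compares straightforward linear addition with a bounded rule of the sort described above, and with the “only the worst-off person counts” rule also floated above, just to show how the three behave as N grows.)

```python
# Toy aggregation rules for N people each suffering one dust speck.
# All numbers are invented; SPECK and TORTURE are arbitrary disutility units.

SPECK = 1.0
TORTURE = 1e12   # made-up disutility of one person's 50 years of torture

def linear(n):
    # Straightforward addition: N specks are exactly N times as bad as one.
    return n * SPECK

def bounded(n):
    # Saturating rule (one possible form): roughly additive for small N,
    # but never more than ten times as bad as a single speck.
    return (10 - 90 / (n + 9)) * SPECK

def worst_off(n):
    # "Lowest quality of life outweighs everyone else": only the worst-off
    # individual counts, so N specks are never worse than one speck.
    return SPECK

for n in (1, 2, 10, 10**6, 10**30):
    print(n, linear(n), bounded(n), worst_off(n))

# linear() eventually exceeds TORTURE, so the chain argument goes through;
# bounded() and worst_off() never do, which is what lets them answer
# "choose the dust specks" -- at the cost of the oddities discussed above.
```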


                    6. You’re right, I’m assuming in my chain of preferences that if you prefer 1 person having a graze to a billion people getting lots of dust specks, you’ll prefer 100 people having a graze to 100 billion getting lots of dust specks. Given 100 opportunities to swap one for the other, you might choose to flip only some N of them, where 0 < N < 100. That would seem to me an extremely weird position to take; it is in no way one that would be directly suggested by your intuitions about the badness of choosing TORTURE, and it still leaves you choosing a billion people suffering some awful thing over one person suffering some very, very slightly more awful thing.


                    7. Sorry, the above comment is very unclear! I’ll try again when I get a moment. And yes, thanks for discussing this with me!


                    8. OK, let me try this again.

                      My chain of preferences contains questions like: is it better that a billion people experience a billion seconds of torture, or that one person experiences a billion +0.01 seconds of torture? And I’d assumed that people’s answer would be a simple “yes” or “no”. If it’s yes, then aggregation is simple; we just imagine a billion separate instances of the exact same question, and answer “yes” to every one. But there’s another possible answer, which is that it depends on how many people are already experiencing X seconds of torture.

                      This would seem like a very weird position to take to me; it’s hard to see how believing it could “flow from” your intuitions about torture and dust specks, and it seems more like a last bulwark against accepting a distasteful conclusion than a genuinely appealing alternative moral position. More importantly, it doesn’t really get you out of the problem, because it still leaves you counterintuitively choosing a billion people experiencing a billion seconds of torture over one person experiencing a billion + 0.01, but now you’re even more weirdly choosing this only sometimes, not all the time.


                    9. Thank you!

                      I agree it’s weird and counter-intuitive, but surely the point of the discussion is that we’ll have to accept something counter-intuitive, and we have to decide what? I’m not sure that this approach is right, just that it’s an alternative approach.

                      It seems to me the agenda for this sort of discussion is to play with various intuitions, and see if we share them or, if not, narrow down the point of disagreement to something we can examine, and hope that is edifying to both of us even if we still disagree. And in playing with them, see if we can choose ones we agree on to base complex extrapolations on, and whether we agree with the extrapolation steps.

                      This would seem like a very weird position to take to me

                      I agree that it seems weird, especially when put like that. But even if it’s not exactly right, it seems to echo something many people instinctively believe.

                      more like a last bulwark against accepting a distasteful conclusion

                      It may well be. But hopefully now it’s pointed out we can evaluate it and accept it or dismiss it 🙂

                      it still leaves you counterintuitively choosing a billion people experiencing a billion seconds of torture over one person experiencing a billion + 0.01

                      Only if you choose the extreme function which totally ignores “not-quite-as-bad torture” (can we please not use time to measure badness of torture, since time can be a proxy both for “worse torture” and for “more torture” which are the two things we’re trying to separate?)

                      Surely if you use a bounded function like the one above, it will produce results which accord with intuition for people experiencing “slightly worse torture” (ie. it will multiply (?)), and results which accord with intuition for “much much worse torture” (ie. it will minimise the worse torture, completely ignoring the much lesser torture), and it will only do weird things when comparing torture with 2-10 times worse torture, which is exactly where our intuition is so weak anyway?

                      (Note that the 10 is entirely arbitrary for the sake of the experiment, I don’t think there is any exact value you could put there!)


                    10. I find myself randomly re-reading this several months later, and I’m sorry I didn’t keep the discussion going! If anyone wants to reopen this in their journal, let me know 🙂


                    11. Hm, maybe. I’m much less convinced by what I was saying than I was at the time, but I’m not quite convinced to plump for infinity dustspecks either…


                    12. Heh 🙂 Don’t say infinity though. Infinity introduces a whole different set of issues that I don’t want to go into. It makes a difference that the number is a googolplex – if it were “only” a googol, then plumping for the dust specks could be the rational choice.

                      Anyway, good to know you’re still thinking about this sort of thing, and I’m up for discussing it further if you’re interested. Thanks for replying!


                    13. First thought I noticed is that my intuition prefers 10^lots of people getting mild positive utility to one person getting 10^lots positive utility, which is consistent with any of the theories “my intuition is inconsistent”[1], “it’s the lowest utility that matters, somehow”, and “utilities can be added and multiplied in the natural way [hence specks are worse than torture]”.

                      [1] Almost certainly true.


                    14. OK, I came back (after a year :)) and thought through my assumptions. I think I was wrong: after playing with some possible axioms for a relation between non-real utilities, I don’t think there are any sensible utility functions that avoid the dustspeck problem, whatever you do. And I agree that any sensible utility-function decision must be independent of what’s going on elsewhere in the universe, and that’s the sticking point.

                      I still don’t like the conclusion, but it may be inevitable.


  2. The dust speck page seems to be curiously missing any mention of the axiom of Archimedes, even in its comments. Did I miss something that said utilitarianism was inherently forbidden from measuring utility using any system other than the real numbers?

      1. I hadn’t seen that, no. Interesting, thank you!

        Though as I understand that page, it’s not assuming the axiom of Archimedes in the same sense – or perhaps I mean in the same place in the system – as I was referring to above, in that it precisely isn’t defining those real-valued utilities as things that can be added together across multiple people. That is, a VNM-rational agent might still perfectly consistently consider the utility of one person being tortured to be lower than that of any number of dust specks, without violating the axiom of Archimedes in the sense it appears among the VNM axioms.

        (In fact, as I read that page, they don’t even have to consider N dust specks to be at least as bad as M < N dust specks. That’s all completely outside the scope of the theorem.)

        1. The VNM rules don’t consider at all how to aggregate utility within a given outcome, only across several uncertain outcomes. Still, the implication is that utility has to be a real number; and I think that it’s reasonable to suppose that in the simplest case one would rank a 50% chance of losing two lives equally with a certainty of losing one.
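
          (Spelling out that last example: a VNM agent ranks lotteries by expected utility, so if, purely for illustration, its utility is linear in lives lost, say u(lose n lives) = -n, the 50% gamble and the certain single loss come out exactly equal.)

```latex
% Illustrative only: assume u(lose n lives) = -n.
\[
\mathbb{E}\bigl[u(\text{gamble})\bigr]
  = \tfrac{1}{2}\,u(\text{lose two}) + \tfrac{1}{2}\,u(\text{lose none})
  = \tfrac{1}{2}(-2) + \tfrac{1}{2}\cdot 0
  = -1
  = u(\text{lose one}).
\]
```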

  3. I’m totally with you, by the way.

    Morality is a matter of intuitions, and they are individual, but common. People feel like their likes and dislikes are more than that, and thus many people suppose that they _are_ more than that.

    I’ve never heard any good explanation for what “good” might mean that boiled down to more than “The way I would like the world to be”.

      1. Oh, I like mine to make a certain amount of sense, but I am generally aware that rules do not map perfectly onto reality, and that I have multiple different starting axioms which are themselves inconsistent.

        I do not have a single root from which all my morals branch, but multiple starting points which intersect to make gnarly organic shapes.

  4. Subject: DCT
    I don’t think that your refutation of DCT works, but then it’s a bugger to understand.

    As far as I can tell DCT is premised on the concept that God is the highest moral authority based on his ‘perfect’ nature.

    The concept of ‘perfect’ in this regard means that he cannot, by definition, instruct us to do anything immoral.

    To be ‘good’ is to strive to attain the moral attributes of God.
    To be ‘bad’ is to fail to attain, or to not even try to attain, those same moral attributes.

    In this way DCT avoids the Euthyphro Dilemma by saying that God is ‘perfect’ and would not command a human to do something that was not within God’s nature.

    It’s a bit more complicated than that I’m sure but that’s my understanding of it.

    The main criticism that I have of this form of DCT is that this ‘perfect’ God is hypothetical: I cannot think of a single actual definition of any of the available Gods, including the Christian God, who could be described in these terms.

    Would this hypothetical God have framed the first four commandments in the language that’s used?

    “I am the Lord your God, who brought you up out of the land of Egypt, out of the house of slavery;

    7 you shall have no other gods before me.

    8 You shall not make for yourself an idol, whether in the form of anything that is in heaven above, or that is on the earth beneath, or that is in the water under the earth.

    9 You shall not bow down to them or worship them; for I the Lord your God am a jealous God, punishing children for the iniquity of parents, to the third and fourth generation of those who reject me,”

    A ‘perfect’ moral God who confesses to jealousy and damns the children of transgressors to the third and fourth generation – really?

    The hypothetical God sounds like quite a nice guy, the actual versions are something else.

    1. Subject: Re: DCT
      Hello anonymous. Who’s that?

      One response to the Euthyphro is to say that God’s commands are not arbitrary, but are the outcome of his Good nature. I think Wes Morriston has the answer to that one: see John D’s posting. Granted that the set of moral properties which are instantiated in God’s nature define “Good”, then they would define “Good” regardless of whether or not God existed, and you fall onto the other horn of the dilemma, effectively.

      Folks like Craig are careful to say that they are talking about moral ontology (“what are moral facts?”) and not moral semantics (“what does ‘good’ mean?”), and therefore instances where God apparently does something which isn’t what we mean by good aren’t a problem for Craig’s moral argument: at worst, Craig can just abandon inerrancy, or retreat to the standard “possible greater good, who are you to say otherwise?” defence.

    1. Excellent question. I don’t have a short pat answer, but I think it’s related to the answer to this simpler question:

      Why entertain moral hypotheticals *at all*? Why think about trolley problems, or any situation you’re not actually facing right now? Why should a moral system speak to anything except the situation you are currently in?

  5. when I say you ought or ought not to do something, I’m appealing to intuitions I hope we have in common… But there’s no reason to suppose there’s more to it than that, moral laws floating around in a Platonic space or being grounded by God

    For what it’s worth, I think this is basically so.

    (There’s also the case of disagreeing about sufficiently complicated but theoretically factual activities designed to produce some result. Eg. we might disagree about economic policy because we disagree about which outcome is preferable (eg. how much we value individual autonomy vs individual health). Or we might disagree about what effects policies are likely to have (eg. we both want approximately the same sort of economic recovery, but one of us thinks we should raise taxes and the other lower them to achieve that effect). Of course, in politics, any two people normally disagree about both 🙂 But I think that’s distinct from what you’re saying.)

    So yes. Although also, people do tend to value some level of consistency (for theoretical or practical reasons), so moral arguments tend to boil down to three phases:

    1. Persuade someone action X will have complicated outcome Y.
    2. Persuade them that they have a clear intuition that simple outcome Z is bad.
    3. Persuade them that Y is more like Z than anything else.

    And people can disagree at any one of these phases. For instance, on the question of “how vegetarian to be”, people (including me) often have little idea of what sort of conditions most animals are raised and slaughtered in, AND/OR have different ideas about whether the moral weight of killing an animal is equal to that of killing a human, greater, zero, or somewhere in between, AND/OR disagree about how to generalise that to the meat industry in general.

    And many even more contentious political issues involve really hoping (or using emotional arguments to persuade someone) that someone has the same sort of intuition at phase 2 as you, before trying to move on to phase 3, and sometimes being horribly surprised that they don’t.

    But yes, I agree that people do have enough in common often enough that it’s always worth asking, and in general we have enough in common that we can work together, even if we couldn’t work with paperclip maximisers.
