Here’s what I currently think about morality. Perhaps the pros in the audience can tell me whether this has a name.
It is very hard to come up with a general theory of what makes something right that isn't open to objections from hard cases where the theory contradicts our intuitions (consider the dust speck problem if you're a utilitarian, or the thought that God might command us to eat babies if you're a believer in Divine Command Theory). Russell Blackford says:
I’ve seen many attempts to get around the problem with morality – that it cannot possibly be everything that it is widely thought to be (action guiding, intrinsically prescriptive, objectively correct, etc.). In my experience, this is like trying to get rid of the bump in the carpet. The bump always pops up somewhere else. We have to live with it.
Call something an "intuition" if it's something we just seem to feel is true. Perhaps there's no empirical evidence for it; perhaps it doesn't even seem to be the sort of thing there could be empirical evidence for. There's a question of whether intuitions are reliable, but one of the things we want from a theory is that it seems compelling, and the conflicts between theory and intuition mentioned above typically leave us finding the theory uncompelling. So a successful theory seems to involve satisfying our intuitions (or at least our meta-intuitions, the means by which we can change some of our intuitions, since there are convinced utilitarians who really do believe in torture over dust specks, Divine Command Theorists who believe God can justly command genocide, and so on).
In the case of free-standing feelings that we ought to do something regardless of any other benefit to us (assuming such feelings exist and aren't always just concealed desires for our own benefit, such as the pleasure we get from doing good), it seems that our upbringing or our genes have gifted us with these goals (and even in the case of the pleasure, something has arranged things so that doing good feels pleasurable to us).
Assuming that we could work out any particular person’s process for deciding whether something is right, we could possibly present it to them and say “there you go, that’s morality, at least as far as you’re concerned”. There’s the amusing possibility that they’d disagree, I suppose, since I think a lot of the process isn’t consciously available to us. I think the carpet bump occurs at least in part because the typical human process is a lot more complex than any of the grand philosophical theories make it out to be.
We might also encounter people who disagree with us but are persuadable by appeal to intuitions common to most humans; humans who aren't persuadable; human sociopaths; or (theoretically) paperclip maximizers. Where someone else's morality is so alien that we cannot persuade them it'd be bad to kill people for fun, or to turn the Earth and all it contains into a collection of paperclips, we can still think they ought not to do it, but that doesn't do us much good unless we can enforce it somehow. I see no reason to suppose there's a universal solvent: a way of persuading any rational mind that it ought to do something, independent of threats of enforcement.
And that's about it: when I say you ought or ought not to do something, I'm appealing to intuitions I hope we have in common, or possibly making a veiled threat or promise of reward. This works because many humans turn out to have a lot in common, especially if they were raised in similar cultures. But there's no reason to suppose there's more to it than that: no moral laws floating around in a Platonic space or grounded by God, or anything similar.
I don't find that this gives me much trouble in using moral language like "right" and "wrong". To the extent that other people use those terms while thinking there are rights and wrongs floating around in Platonic space which would compel any reasoning mind, I'm something of an error theorist about their usage, I suppose; but I don't suppose that everyone does use the terms that way.