rationality

apdraper2000 joined the discussion on people who have fully general counterarguments against the opposition, with a link to Peter Suber’s essay, Logical Rudeness. Suber’s essay is well worth reading.

What Suber calls logical rudeness is a response to criticism which insulates the responder from having to address the criticism. Suber comes up with a taxonomy of logical rudeness:

The primary type is probably the application of a theory of justified dismissal, such as a theory of error or insanity, to critics and dissenters. Another major type is the interpretation of criticism as behavior to be explained rather than answered. This is closely connected to the type that refuses to see a meta-level in the critic’s criticism, and will not allow critics to escape the object-language of the theory. A rude theory may reinterpret criticism as a special kind of noise, or as unwitting corroboration. A theory may evade criticism without rudeness by postponing an answer or referring the critic to the answer of another. The abuse of postponement may be rude, however, as when the motions of postponement are made shorthand for dismissal, or when the subsumption of an objection under a larger system of belief is made shorthand for refutation. A rude theory may be held for reasons other than its correctness, such as the support for the believer shown by voters or grant-giving agencies. A weak sort of rudeness lies in any unfalsifiable theory, and a strong sort lies in boon theories which identify critics as nonpossessors of a special boon. The theories of justified dismissal and the boon theories tell critics that they are disqualified from knowing truth or even deserving answers because of some well-explained foible or fault in themselves. All the types have in common an evasion of a responsibility to answer criticism on the merits, when that evasion is authorized by the theory criticized. All types are triggered only by expounded criticism, and only insulate the proponent from conversion or capitulation, not the theory from refutation.


There’s the potential for this sort of thing in anyone with a belief whose scope is broad enough to explain why some other people don’t believe it. As mentioned previously, some Christians tell atheists that atheists know there’s a God really and are just being atheists to annoy, because they know it teases. Some atheists tell religious people that theists won’t accept atheistic arguments because they’re afraid of death, or too immersed in the church community to bear the social cost of leaving. In a conversation about race or gender, it won’t be long before someone claims another person’s view is held because of their privilege. And so on.

Suber calls this rude rather than fallacious because it is possible for people who hold true beliefs to be “rude” in this way (and in fact, rejecting arguments because they come from rude people is itself rude). Instead, rudeness violates the norms of debate; but by those same norms, we’d like even people whose beliefs lead them to be rude to be able to join in.

In Suber’s taxonomy, some sorts of rudeness seem worse for debate than others. Towards the end of the essay, Suber distinguishes “fixed belief” from “critical belief”, the difference being whether the believer is prepared to concede that they might be wrong. Suber says it’s not clear that critical belief is possible or desirable in all cases. In particular, it seems to me that people who regard disagreement as a moral defect will find it hard to be critical believers.

Suber wonders about the value of debate (by which I assume he means the general to-ing and fro-ing of philosophical conversation, not merely formal public debates). It seems to me that this value partly lies in reducing the problems of filtered evidence. We ourselves filter the evidence we search for, but a multi-sided debate might serve to correct this. One way of squaring a desire for debate with beliefs which justify rudeness might be to admit that we hold such beliefs, but to avoid rudeness itself as a tactic. Beliefs which justify rudeness might legitimately influence whether we want to have the debate at all, but once committed, it seems worth holding our own beliefs critically.

Following on from his review of two books by theistic evolutionists, Jerry Coyne recently wrote an article criticising the US National Academy of Sciences for saying that evolution and Christianity are compatible. Richard Hoppe at Panda’s Thumb disagrees with Coyne, but PZ Myers supports him. Atheist fight!

Is evolution compatible with Christianity? Well, yes and no. I was a Christian who believed in evolution. This means not having good answers to some stuff Christians might care about: was the Fall a real event, and if not, where does original sin come from? Did physical death really enter the world through sin? If, as Christians usually argue as part of their theodicy on natural disasters, creation itself was corrupted in the Fall (whatever the Fall was), how exactly does that work? If you’re a Christian who accepts evolution, you don’t need atheists to ask these awkward questions, your creationist brothers (and sisters) will do a much better job of it.

But that doesn’t show incompatibility. If you keep running into these problems and have to keep adding ad hoc patches to your theory, you should consider discarding it, but there are things I don’t have good answers to as an atheist, and that hasn’t stopped me being one.

I was a student of science who was a Christian. That seems to be where the real problem lies. Theistic evolutionists tend to say stuff like “Evolution could have been the way God did it” or “Maybe God nudges electrons from time to time”. They might make a wider point about “other ways of knowing”. At some point, someone is probably going to say “well, Science cannot prove your wife loves you, but you believe that, don’t you?”

The Less Wrong crowd recently discussed whether their community is and should be welcoming to theists. Theism, Wednesday, and Not Being Adopted is a good post which deserves reading on its own merits, but I was particularly interested in Eliezer Yudkowsky’s comment about compartmentalising rationality.

If Wednesday [the child of Mormons mentioned in the article] can partition, that puts an upper bound on her ability as a rationalist; it means she doesn’t get on a deep level why the rules are what they are. She doesn’t get, say, that the laws regarding evidence are not social customs that can be different from one place to another, but, rather, manifestations of the principle that you have to walk through a city in order to draw an accurate map of it.

Sam Harris mocks this compartmentalisation in his satirical response to Coyne’s critics (the paragraphs following “Finally, Kenneth Miller, arrives” are the key ones). Science is one manifestation of the principle that you draw a map by walking the streets, not by sitting in your room and thinking hard about it. There are other legitimate forms of cartography, such as the one you apply when you conclude that someone loves you (assuming you’re not actually a stalker). Perhaps, like the Tube map, they’re not doing quite the same precise measurement as you’d expect from science, but they make useful maps.

Recall the original point of the Flying Spaghetti Monster, before it developed into a cod-religion for annoying Christians with, like the worship of the Invisible Pink Unicorn (PBUHHH). The FSM’s inventor used it to point out that if you’re going to say your god created the universe because you sat in your room and had a strong inner conviction about it, then, on your own argument, the FSM revealed to me as a Pastafarian is as legitimate as the creator your conviction revealed to you. This point is not lessened if you say your god sometimes happens to do stuff in a way which isn’t directly incompatible with known science.

Perhaps theism isn’t incompatible with evolution, but it is incompatible with good cartography.

William liked the bit in my last post where I said that most believers are carrying a map of the real world somewhere, because they know in advance what excuses to make for the apparent absence of gods and dragons. Of course, I stole it from Overcoming Bias (mentioned previously here). Carl Sagan’s point in the original invisible dragon story is about falsifiability. The crew over at Overcoming Bias use it in another way, to think about what’s going on in a dragon-believer’s head when they know enough to anticipate the results of testing for the dragon, but not enough to say “there’s no dragon”.

It’s that sort of keen observation that keeps me going back to Overcoming Bias despite all the stuff about freezing your head when you die. The aim of the game for Biasers is to have a map which matches the territory, and to be able to read it aloud. They’ve started Less Wrong, a new site where anyone can contribute something they think will help achieve this aim. It’s based on the code for Reddit, where users can vote stories up or down, though at Less Wrong, the editors manually promote stories to the front page, and there’s a separate page where you can view stuff that’s merely popular. You can follow Less Wrong on LiveJournal by adding less_wrong to your friends list.
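As an aside, here’s a toy sketch (in Python, since the Reddit codebase Less Wrong forked is Python) of the kind of moderation model I’m describing: votes determine what shows up as merely popular, while the front page is only what the editors choose to promote. The names and structure are my own invention for illustration, not the site’s actual code.

```python
from dataclasses import dataclass, field

@dataclass
class Story:
    """A submitted article with community votes and an editorial flag."""
    title: str
    score: int = 0          # net of up-votes and down-votes
    promoted: bool = False  # set by hand by an editor

    def vote(self, up: bool) -> None:
        self.score += 1 if up else -1

@dataclass
class Site:
    stories: list = field(default_factory=list)

    def popular(self):
        # What the community voted up, highest score first.
        return sorted(self.stories, key=lambda s: s.score, reverse=True)

    def front_page(self):
        # Only editorially promoted stories appear here, whatever their score.
        return [s for s in self.stories if s.promoted]
```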

The community is working pretty well so far. Watching the decline of Kuro5hin makes me worry that community-moderated sites will turn to crap (although there’s still some good stuff over at k5, such as an article about the tendency of community-moderated sites to turn to crap), but having real humans in charge of promoting articles might mitigate that. The system has given some new voices a chance, notably Yvain. Here are some of my favourite articles so far:

I’ve made a few comments over there, although nothing earth-shattering: sympathising with someone whose girlfriend left him for Jesus, or talking about Bernard Woolley and irregular verbs.

I’ve been thinking about posting some more about what I’ve got out of Overcoming Bias and Less Wrong here on LJ. It’s all very well ranting about religion, but rationality isn’t graded on a curve. Don’t worry, religion-rant fans: I’ve got a few more of those lined up too.

Some of you already read Overcoming Bias, the blog of Oxford’s Future of Humanity Institute (I’ve seen gjm11 commenting there, and wildeabandon mentioned it, I think). I’ve been reading quite a bit of the archives recently, as evidenced by the number of comments I’ve made referring to old postings there.

The bias of the title is cognitive bias, the psychological term for systematic mistakes human minds tend to make. The purpose of the blog is self-help and social change: “If we know the common patterns of error or self-deception, maybe we can work around them ourselves, or build social structures for smarter groups. We know we aren’t perfect, and can’t be perfect, but trying is better than not trying.”

Eliezer Yudkowsky is one of the main contributors there. He’s an interesting character: heavily invested in ideas about the Singularity and Friendly AI. His stuff on Overcoming Bias touches on those interests, but is worthwhile even if you consider such ideas silly (I’m not sure whether I do or not at this point: my instinctive reaction that this stuff is far-fetched may be an example of bias).

What I like about his writing is that it’s usually clear and incisive. He shows a passion for reason (contrary to Star Trek, a passion for reason isn’t a contradiction in terms) and almost a reverence for it. You get the feeling that his SF stuff about Bayesian masters undergoing the Ritual Of Changing One’s Mind isn’t just an illustrative analogy. Coming so soon after I read Anathem, I see the blog as one place where this world’s avout hang out. Stuff like Diax’s Rake would be right up their alley.

livredor once told me that one of my biases is to latch on to someone very clever and align my beliefs to theirs (I think this bias is a common one among technical people who have taught themselves some philosophy). So I ought to be a little careful when I read his stuff. Yudkowsky’s faults are that he’s also self-taught, so he needs his (likewise very clever) commenters to point out that he’s covering old ground, has missed out on the standard arguments against his position, or is not using the standard definitions of some terms (such as the case where he argues his moral views are not moral relativism). Some of the postings where he talks about how he used to think a whole load of wrong stuff and now doesn’t can get tedious (ahem). In some cases he’s made extended series of posts where I don’t understand the conclusion he’s trying to draw (the series on morality is an example).

Still, I’m very much enjoying articles like his piece on staging a Crisis of Faith (which isn’t ultimately about losing religious faith, but about changing long-held opinions; it’s a good introduction to the blog as a whole, as there are links to many other good articles at the bottom of it), Cached Thoughts, Are Your Enemies Innately Evil? (shades of Bartlet’s “They weren’t born wanting to do this” there), Avoiding Your Belief’s Real Weak Points, Belief in Belief (not quite your standard Dennett argument), and his argument that Elijah conducted the original scientific experiment.

I recommend the blog to you lot. If you like reading blogs on LJ, you can find it at overcomingbias.