Jackdaws love my big sphinx of quartz
Scott

Principle of charity and n-step theories of mind [Oct. 17th, 2012|02:35 am]

I had a hard time writing this article, and I decided to try expressing some concepts in pseudocode. There are at least three problems with this. First, it looks pretentious. Second, I have zero programming training and the code is probably terrible. Third, I hate it when other people try to express simple non-mathematical concepts in math or pseudocode and I imagine other people will hate mine. If you hate it, let me know in the comments and I will try not to do it again. While I'm disclaiming, let me also apologize for pretending that "mental algorithms" are a real thing, and for being super-sloppy in pretending to represent them. I just had a really hard time putting this idea into language and settled for poor communication over none at all.

EDIT: wildeabandon suggests the is-ought distinction might work better than the code/variable distinction.


Principle of charity

Suppose you're a Democrat who supports Obamacare. In fact, suppose we have an exact printout of your mental algorithm, and it reads:

D: support($POLICY) iff (helps_people_in_need($POLICY) = 1); helps_people_in_need(Obamacare) = 1

That is, you support a policy if and only if the policy helps people in need, and you believe Obamacare does this.

You go out and meet a Republican who opposes Obamacare. There are at least two possible mental algorithms the Republican might use to produce this position:

R1: support($POLICY) iff (helps_people_in_need($POLICY) = 0); helps_people_in_need(Obamacare) = 1

R2: support($POLICY) iff (helps_people_in_need($POLICY) = 1); helps_people_in_need(Obamacare) = 0

That is, the Republican could agree with you that Obamacare helps people in need, but oppose helping needy people; or the Republican could support helping needy people but believe Obamacare does not do so.

The Principle of Charity tells us to prefer R2 to R1. This suggests a possible nonstandard phrasing of the Principle: prefer explanations of opposing positions in which they share the same algorithm, but disagree on the values of variables.
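If it helps to see that in something runnable, here is a minimal Python sketch (the names and structure are my own illustration, nothing rigorous): R2 reuses D's decision rule and only changes a belief, while R1 keeps the belief and changes the rule.

# Toy model of the "same algorithm, different variables" reading of charity.
# All names here are illustrative, not claims about anyone's real psychology.

def supports(policy, believes_it_helps, wants_to_help_needy=True):
    # The shared rule D: support a policy iff you think it helps people in need.
    # Flipping wants_to_help_needy turns this into R1's rule instead.
    helps = believes_it_helps[policy]
    return helps if wants_to_help_needy else not helps

democrat = {"Obamacare": True}   # D:  thinks it helps, wants to help
r1       = {"Obamacare": True}   # R1: thinks it helps, opposes helping
r2       = {"Obamacare": False}  # R2: wants to help, thinks it doesn't

print(supports("Obamacare", democrat))                       # True
print(supports("Obamacare", r1, wants_to_help_needy=False))  # False, via a different rule
print(supports("Obamacare", r2))                             # False, via a different belief

On this reading, the Principle of Charity says: when both models predict the observed behavior equally well, prefer the one that only edits the beliefs, not the function.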

Theory of mind

Theory of mind is a concept from the psychologists, who tend to worry a lot about when children develop it and whether autistic people might not have it. It mostly means the ability to model other people as possibly thinking differently than you do.

The classic example involves two young children, Jane and John. An experimenter enters the room and hides a toy under the bed. Then she takes John out of the room and, with only Jane watching, takes the toy out from under the bed and hides it in a box. Then the experimenter brings John back into the room and asks Jane where John will look for the toy.

If Jane is very young or developmentally disabled, she predicts John will look for the toy in the box, since that is where she would look for the toy. If Jane is older, she will predict John will look for the toy under the bed. Her theory of mind has successfully predicted that John, who only saw the toy hidden under the bed, will still expect it to be there, even though she herself feels differently.


Jane's mental algorithm looks like this:

J: search($LAST_KNOWN_LOCATION_OF_TOY);
$LAST_KNOWN_LOCATION_OF_TOY = box


Jane correctly assumes she can model John with her own mental algorithm; that is, he will think the same way she does. However, before she has theory of mind, she assumes she and John share all variables. Once she develops theory of mind, she realizes that although John will use the same mental algorithm, his variables may have different values.
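Purely as another illustrative sketch (toy Python, hypothetical names): the difference between lacking and having theory of mind is whether Jane runs her own search algorithm with her own variable values or with John's.

# Toy Jane-and-John model: one shared search algorithm, two sets of variables.

def predict_search(last_known_location_of_toy):
    # The shared algorithm J: look wherever you last saw the toy.
    return last_known_location_of_toy

jane_last_saw_toy = "box"        # Jane watched the toy being moved
john_last_saw_toy = "under_bed"  # John only saw the original hiding place

# Without theory of mind, Jane plugs in her own variable and predicts "box".
print(predict_search(jane_last_saw_toy))

# With theory of mind, she keeps the algorithm but substitutes John's variable.
print(predict_search(john_last_saw_toy))  # "under_bed", the correct prediction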

Supposedly theory of mind is the sort of thing you either have or you don't, and as long as you're older than four and neurotypical, you have it. But I wonder. There are some people who it seems just don't get the Principle of Charity, and I'm starting to wonder if there's a theory of mind correlation.

Barry The Bully and the 1-step theory

Let me illustrate with a story that is very loosely based around a fight I vividly remember between a very old group of acquaintances, in about the same way the Chronicles of Narnia are very loosely based on the New Testament:

Barry is a bully. Whenever he doesn't like anyone, he picks on them and insults them until they cry or run away. For example, a while back he did this to my friend Bob, and Bob got pretty traumatized.

Mary and Sarah are not bullies. They're both awesome people, and I like both of them. But unfortunately Sarah hurt Mary a long time ago, and Mary hates Sarah viciously.

One day Barry, Mary, Sarah and I happen to be stuck together. Sarah says something, and Mary, who really really hates her, gives her the cold shoulder and makes it clear that she's not talking to her.

Barry, who likes Sarah, gets enraged at Mary and starts cursing at her, telling her what a worthless person she is and how no decent person would ever treat someone the way she's treating Sarah and how she should be ashamed of herself. This goes on and on, until it gets awkward.

Later, I encounter Barry and he still wants to tell me how much of a jerk Mary is for being mean to Sarah. I point out "Barry, come on. You're mean to people all the time. Like Bob!" And Barry says "Yeah, but Bob deserved it! Bob is a terrible person. Mary is mean because she bullied Sarah even though Sarah was nice!"

Barry's algorithm for being mean to someone looks something like:

B: bully($VICTIM) iff (hate($VICTIM) = 1);
hate(Bob) = 1, hate(Sarah) = 0


In other words, Barry bullies someone only if he hates them, and he hates Bob but doesn't hate Sarah.

If Barry had no theory of mind at all, he would sit there puzzled, wondering why Mary was bullying Sarah when in fact it was Bob she hated and Sarah looked nothing like Bob. But in fact, Barry does have a theory of mind. He realizes that Mary hates Sarah just as he hates Bob, so he's not surprised when she bullies Sarah. So far, so good. But then I ask Barry: given that you and Mary are following the same cognitive algorithm, aren't you morally equivalent? Barry says no, and we imagine his algorithm as follows:

B2: hate($VICTIM) = is_bad_person($VICTIM);
is_bad_person(Bob) = 1, is_bad_person(Sarah) = 0

Here Barry says that he only hates someone if they are a bad person. So he's explaining the values he assigned hate(Bob) and hate(Sarah) in the last step of his algorithm. And in the process, he's rejecting my claim that his algorithm and Mary's algorithm are equivalent. He's saying "No, my algorithm tells me to only hate bad people; Mary's algorithm tells her to hate Sarah even though she's perfectly nice."

But here we have a glaring theory-of-mind failure. Barry is failing to realize that Mary may be following the same algorithm, but with different values in the variables:

M2: hate($VICTIM) = is_bad_person($VICTIM);
is_bad_person(Bob) = 0, is_bad_person(Sarah) = 1


In other words, Mary also hates only bad people, just like Barry. She just has a different idea of who the bad people are.

I would argue that here Barry displays a one-step theory of mind. He's capable of understanding that people can think one step differently than he does; otherwise he wouldn't be able to understand why Mary bullies Sarah instead of Bob. But people who think differently than he does two steps back? That's crazy talk!

Therefore, Mary must be an evil person working off a completely different mental algorithm that tells her to hate nice people. And so he fails at the Principle of Charity.
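To make the "one step but not two" failure concrete, here is another hedged toy sketch (illustrative Python, not a model of any real person): Barry reuses his own algorithms when simulating Mary, but only substitutes her variables at the first level.

# Toy model of Barry's 1-step theory of mind. Names are illustrative only.

def bullies(victim, hates):
    # Shared top-level algorithm B: bully someone iff you hate them.
    return hates[victim]

def hates_because(victim, seems_bad):
    # Shared second-level algorithm B2/M2: hate someone iff they seem like a bad person.
    return seems_bad[victim]

barry_seems_bad = {"Bob": True, "Sarah": False}
mary_seems_bad  = {"Bob": False, "Sarah": True}

# Step 1: Barry substitutes Mary's hate() values, so her behavior is no mystery to him.
barrys_model_of_marys_hates = {"Sarah": True, "Bob": False}
print(bullies("Sarah", barrys_model_of_marys_hates))  # True, as observed

# Step 2: but he evaluates her hatred with *his* seems_bad values...
print(hates_because("Sarah", barry_seems_bad))  # False: she hates someone who "isn't bad", so she must be evil
# ...instead of charitably substituting hers, which would explain her the way he explains himself.
print(hates_because("Sarah", mary_seems_bad))   # True: the model Barry never runs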

Biased creationists and the 2-step theory

Take a look at this article on confirmation bias and in particular at their discussion of creationism. We imagine the evolutionist who wrote the article to be using an algorithm something like this:

E: support($THEORY) iff seems_true($THEORY) = 1;
seems_true(evolution) = 1, seems_true(creation) = 0


That is, support a theory if and only if it seems to be true, and evolution seems to be true and creationism doesn't, so support evolution.

Now the author of that article certainly has theory of mind. She's not accusing the creationists of lying, promoting creationism for their own sinister purposes even though they secretly know evolution is right. She charitably and correctly models the creationists' algorithms as:

C: support($THEORY) iff seems_true($THEORY) = 1;
seems_true(evolution) = 0, seems_true(creation) = 1


That is, the creationists also support the theory that seems true to them, but it's creationism that seems true in their case.

But just as the difference between Barry and Mary lay in the deeper level of how they calculated their hate($VICTIM) function, so we go a little deeper and investigate how the evolutionists and creationists calculate their seems_true function.

E2: seems_true($THEORY) = good_evidence($THEORY) & confirmation_bias($RIVAL_THEORY);
good_evidence(evolution) = 1, confirmation_bias(creation) = 1, good_evidence(creation) = 0, confirmation_bias(evolution) = 0


In other words, a theory will seem true if it has good evidence and if the rival theory is due mostly to confirmation bias. Evolution has good evidence behind it, and creationism only stays standing because of the confirmation bias of its supporters, so evolution should seem true.
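Read as toy code (again my own illustrative rendering, not the article's), E2 is just a conjunction; the creationists' version introduced below is the same function evaluated with different variable values.

# Toy evaluation of E2 for the evolutionist. Illustrative names only.

def seems_true(theory, rival, good_evidence, confirmation_bias):
    # E2: a theory seems true if it has good evidence AND the rival theory
    # is mostly a product of its supporters' confirmation bias.
    return good_evidence[theory] and confirmation_bias[rival]

evolutionist_evidence = {"evolution": True, "creation": False}
evolutionist_bias     = {"evolution": False, "creation": True}

print(seems_true("evolution", "creation", evolutionist_evidence, evolutionist_bias))  # True
print(seems_true("creation", "evolution", evolutionist_evidence, evolutionist_bias))  # False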

But the article writer is more charitable than Barry. She is not a 1-step theory-of-minder. She understands that, even though the creationists are wrong about it, they do believe they have good evidence for their creationism, and they do believe that the evolutionists are suffering from confirmation bias. So the creationists must be using the algorithm:

C2: seems_true($THEORY) = good_evidence($THEORY) & confirmation_bias($RIVAL_THEORY);
good_evidence(creation) = 1, confirmation_bias(evolution) = 1, good_evidence(evolution) = 0, confirmation_bias(creation) = 0


Here the author is being charitable enough to agree that the creationists are following the same mental algorithms she is even up to the point of what makes a theory seem true to her. But we can still go one level deeper. Let's investigate this term "confirmation_bias" from the evolutionists' point of view.

E3: confirmation_bias($THEORY) = 1 iff (good_response($THEORY, $CRITICISM) = 0);
good_response(creationism, angiosperm_pollen) = 0, good_response(evolution, irreducible_complexity) = 1

And here, the author says, the evolutionists and creationists are finally using genuinely different algorithms. The evolutionists only accuse a theory of having confirmation bias if it has no good response to the criticisms against it. Creationism has no good response to the argument presented about angiosperms.

The article author didn't provide an example of a "fact" creationists might bring up that they believe evolutionists have no good response for (note that this is hilariously ironic in an article on confirmation bias of all things), so I chose one myself: the claim that certain structures like the bacterial flagellum are "irreducibly complex" and could not have evolved by "mere chance". RationalWiki, of course, believes evolutionists do have an answer to that one, which I have represented by good_response(evolution, irreducible_complexity) = 1.

But let's go back to the example with the Republican at the top of the page. There are two possible explanations for why the creationists might keep accusing the evolutionists of bias at this point rather than accepting their own bias:

C3-a: confirmation_bias($THEORY) = 1 iff (good_response($THEORY, $CRITICISM) = 1);
good_response(creationism, angiosperm_pollen) = 0, good_response(evolution, irreducible_complexity) = 1

C3-b: confirmation_bias($THEORY) = 1 iff (good_response($THEORY, $CRITICISM) = 0);
good_response(creationism, angiosperm_pollen) = 1, good_response(evolution, irreducible_complexity) = 0

In other words, under algorithm C3-a the creationists admit that the evolutionists have much better arguments, which they cannot rebut, and that the evolutionists have demolished their flimsy objection about irreducible complexity, but screw it, they're going to accuse the evolutionists of being the ones with the bias anyway! Under algorithm C3-b the creationists have the same criterion for bias as everyone else; they just believe they're right about the angiosperms and they're right about irreducible complexity.
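As with the Obamacare example at the top, the two readings can be put into toy code (my own hypothetical rendering): C3-a keeps the evolutionists' variables but bends the rule, C3-b keeps the rule and changes the variables.

# Toy contrast between C3-a (different algorithm) and C3-b (different variables).
# good_response records whether a theory has a good answer to the standing criticism of it.

def calls_bias_shared(theory, good_response):
    # The shared rule: accuse a theory of confirmation bias iff it has no good response.
    return not good_response[theory]

def calls_bias_c3a(theory, good_response):
    # C3-a's rigged rule: accuse the theory that *does* answer its critics anyway.
    return good_response[theory]

evolutionist_view = {"evolution": True, "creationism": False}
creationist_view  = {"evolution": False, "creationism": True}

# C3-b: same rule as the evolutionists, different beliefs about the responses.
print(calls_bias_shared("evolution", creationist_view))  # True
# C3-a: same beliefs as the evolutionists, but a rule bent against the other side.
print(calls_bias_c3a("evolution", evolutionist_view))    # True

Both routes predict the observed accusation; the question is which one you should assume.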

Ten seconds of Google searching shows that the creationists actually have a very complex literature explaining why they think the evolutionists' angiosperm pollen argument collapses and why in fact they think the pollen record is a strong argument for creation (see for example Pollen Order; Pollen Spores and Modern Vascular Plants in the Cambrian and Pre-Cambrian here). And here's a whole page where the guy who originated the idea of irreducible complexity responds to evolutionists' objections and "shows" why they're "wrong". So it is painfully obvious that, much as the Principle of Charity would predict, the creationists are using algorithm C3-b, the one that is exactly the same as the evolutionists' algorithm, the one where they agree that you only get to call "bias" if you believe your arguments are better than your opponent's - but they believe their arguments are.

Note for the person (people?) who combs my blog looking for things she can take out of context to discredit Less Wrong (oh yes, this happens): I am not claiming that the creationists are right, or that their arguments are "just as good as the evolutionist arguments" or anything stupid like that. This isn't a post about rightness, it's a post about charity. I am claiming that the creationists understand the principle of "if someone criticizes your theory, you need to refute the criticism or abandon the theory", and that they are using the same coarse-grained mental algorithms as the evolutionists, at least at this shallow three-step-down level.

The n-step theory of mind

The three-year-old child has a 0-step theory of mind. She can never imagine anyone thinking differently than she does at all.

Barry the Bully had a 1-step theory of mind. He could imagine someone thinking differently than him, but he couldn't imagine they might have a reason for doing so. They must just hate nice people.

Whoever wrote that wiki page has a 2-step theory of mind. She can imagine creationists thinking differently than she does, AND she can imagine them having reasons to do so, but she can't imagine them justifying those reasons. They must just not care if their arguments get demolished.

How deep does the rabbit hole go? I think I have at least a 3-step theory of mind; when I read that RationalWiki article I immediately thought "No, that's not right, the creationists probably believe they can support their own arguments". I don't know if it's possible to have an unlimited-step theory of mind. I expect it is: I don't think more steps take more processing power past a certain point, just willingness to resist the temptation to be uncharitable.

I think if someone did have an unlimited-step theory of mind, the way it would feel from the inside is that they and their worthier opponents have pretty much the same base-level mental algorithms all the time, but their opponents just consistently have worse epistemic luck.
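If it's useful, the whole n-step idea compresses into one last hedged sketch (my own framing in toy Python, not anything from the psychology literature): an n-step modeler charitably shares the algorithm down to depth n and writes the other person off below that.

# Sketch of an n-step theory of mind as a depth limit on charity.

def model_opponent(levels, depth_limit):
    # Walk down the stack of sub-algorithms. Within the depth limit, assume the
    # other person runs your algorithm with different variables; past it, conclude
    # they must be running a different algorithm altogether.
    for depth, level in enumerate(levels):
        if depth >= depth_limit:
            return "at " + level + " they must just be running a different algorithm"
    return "same algorithms all the way down; they just have worse epistemic luck"

levels = ["support(policy)", "seems_true(theory)", "confirmation_bias(theory)"]

print(model_opponent(levels, depth_limit=1))   # a 1-step modeler, like Barry
print(model_opponent(levels, depth_limit=2))   # a 2-step modeler, like the wiki author
print(model_opponent(levels, depth_limit=99))  # an unlimited-step modeler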

Comments:
From: toothycat
2012-10-17 10:39 am (UTC)

Thanks for the interesting read :)

I noticed a couple of name swaps - you used "Sally" for "Jane" in the Jane-John example. And later on, when discussing Barry, Bob, Sarah and Mary: "He realizes that Mary hates Sarah just as he hates Bob, so he's not surprised when she bullies Mary" - shouldn't that be "he's not surprised when she bullies Sarah"?
From: squid314
2012-10-17 10:57 am (UTC)

Thank you. Good catches and fixed.
From: wildeabandon
2012-10-17 10:48 am (UTC)

I like this post, but I'm not sure that the distinction between components of a mental algorithm and the variables that go into them is quite as clear-cut as you're making it out to be.
From: squid314
2012-10-17 10:58 am (UTC)

Yes, I agree. But I still feel like there's something meaningful going on here. Can you think of a better analogy that preserves what I'm trying to say but avoids the vague-algorithm/variable distinction problem?
From: wildeabandon
2012-10-17 11:06 am (UTC)

I'm not sure that it's an analogy per se, but I think it's related to the is/ought dichotomy, so that someone with an nth degree theory of mind would assume that those who disagree would have similar ought beliefs, but different is beliefs.

I don't know if this actually sheds any light, but I think there's quite a lot of thought that has already gone into is/ought, and someone more knowledgeable than me might be able to make meaningful inferences.
From: xiphias
2012-10-17 11:36 am (UTC)

You don't need an n-step theory of mind. All you need to do is be able to understand someone else, understand why they believe as they do, understand what mental structures promote those beliefs, and understand their process.

This happens to me all the time. When I disagree with someone, I can usually drill down to find the disagreement point, which might be a logical or factual fallacy on one of our parts, but is more commonly a postulate conflict -- weighing different postulates differently.

For instance, one of the reasons people oppose Obamacare is because they hold "self-determination" as an axiomatic good. I, too, hold "self-determination" as an axiomatic good, but weigh it differently.

"Self-determination" includes freedom of action, which includes freedom to choose how one uses one's resources. A libertarian argument against Obamacare could state that, even if health is an axiomatic good, and even if Obamacare would tend to promote that good, self-determination is a greater good, and using one person's money to pay for another person's health care harms self-determination.

You can tell if you've correctly deduced someone else's mental structures by testing it out. Basically, you end up running a simulation of their mind inside your mind, and you test to see if you can predict how they will think, before they think it.

When it works, it's a lot of fun -- you end up reading their mind even before they think things.
From: marycatelli
2012-10-17 12:59 pm (UTC)

Yes. Support for Obamacare and opposition to Obamacare can turn on principles other than helping people in need.

For instance, someone can support Obamacare because it increases the strength of the Leviathan.
From: xiphias
2012-10-17 02:24 pm (UTC)

While that's theoretically possible, I don't think that many people consider "strengthening the power and reach of government" to be a per se good -- rather, I think it's more perceived as a method toward good, rather than a good in itself.

In other words, I think that the causality is more likely to go, "Obamacare is a reason to strengthen the Leviathan", rather than "strengthening the Leviathan is a reason for Obamacare." Certainly, that's how I think of it -- I don't consider there to be an inherent value in either big or small government, but rather support whatever model of government I believe would best enable the things which I do consider to be good.
From: oscredwin
2012-10-17 05:59 pm (UTC)

I believe a better phrasing of "someone can support Obamacare because it increases the strength of the Leviathan" might be: "X is so important that it can't be left to [not US government]".
From: torekp
2012-10-17 11:40 pm (UTC)

But that's not a rephrasing, that's a distinct alternative, if not the very reverse.
From: marycatelli
2012-10-18 10:55 pm (UTC)

torekp has it right. Someone might put forth that argument while really just wanting POWER, POWER, POWER for the Leviathan, but it doesn't mean he really thinks X is important.
From: marycatelli
2012-10-18 12:00 am (UTC)

There are too many actions which make sense only on the grounds that "strengthening the Leviathan" is the true motive. It's when a program has been clearly demonstrated to not help people in need and indeed to harm them that you really have to wonder about motives.
From: xiphias
2012-10-18 01:32 am (UTC)

I think that has more to do with the "starting from the conclusion and working backward, discarding all evidence which disagrees" thing. If one has supported a program, then acknowledging that the program has failed means accepting that one has made a mistake, and that one has actually caused harm rather than good.

Nobody wants to believe that they have done harm rather than good, and it's easier to simply disbelieve the evidence than to face up to the fact.
From: marycatelli
2012-10-18 02:27 am (UTC)

That would only cover the cases where they don't want to expand the program, only keep it in place.
From: xiphias
2012-10-18 04:52 pm (UTC)

My gut feeling is that even that can be chalked up to the perversity, illogic, and need-to-be-right of the human mind, rather than the desire to expand the government. Sure, that's the only LOGICAL reason to do that, but when's the last time you've seen a human do the LOGICAL thing?
From: marycatelli
2012-10-18 10:57 pm (UTC)

Principle of charity. Affected ignorance does not affect one's guilt, except insofar as it may, because it springs from hardness of heart, mean that the person is more guilty than if he had acted with full knowledge.

Also remember that one can want to expand the Leviathan beyond all reason because one's sloth means not wanting to act on one's own, and imagining that the Leviathan can handle it.
From: nancylebov
2012-10-17 11:37 am (UTC)

There's something in there where I think it's easy to believe that the other side isn't as careful about evidence-- they're more likely to start with a bottom line and then look for ways to prove it.
From: marycatelli
2012-10-17 12:58 pm (UTC)

Once you come to a conclusion, it's only natural to not want to go through that work again. Regardless of how well-founded it was.
From: (Anonymous)
2012-10-17 02:44 pm (UTC)

keeping conclusions

That's true, but it's not just laziness. I can read very good arguments, with examples, evidence, etc. that support a conclusion, and some time later all I remember is that I now believe X based on a very convincing piece that I can't locate again or recall in detail. Then when I see a counter argument that is also convincing, I can't well defend my previous position (other than based on my own logic, which may be fine, but I don't have access to the evidence that supports the premise).
I blame the internet.

Randy M
From: marycatelli
2012-10-18 12:44 pm (UTC)

Re: keeping conclusions

Principle of charity!
From: xiphias
2012-10-17 02:27 pm (UTC)

It is also important to understand that the exact same thing is ALSO true of one's OWN side. We ALL have a tendency to start with the conclusion and work backward toward the reasons. The scientific method is an attempt to fight this tendency, and people need to be carefully trained in it. And even WITH such training, people STILL screw up.
From: nancylebov
2012-10-17 04:40 pm (UTC)

I agree with you.
From: (Anonymous)
2012-10-17 02:53 pm (UTC)

Can you change the formatting? Links are in blue as they should be; pseudocode is in blue but should be a different easily-distinguished color; some completely random words are in blue and should be in normal black, possibly italicized if you meant to emphasize them. Thanks.
From: cakoluchiam
2012-10-22 12:14 pm (UTC)

My standard for code is a forced monospace font with a blockquote indent. Changing font color in general is bad form, particularly since some people use stylesheets with background colors that could accidentally make your text invisible.
From: erratio
2012-10-17 03:08 pm (UTC)

I don't feel like the pseudocode really added anything to your points that the English summary of them didn't do just fine on its own.

I'm not sure theory-of-mind modelling really gets at that kind of reasoning though. A friend of mine who has lousy Principle of Charity could be modelled as something like: there is objective truth to most things. People who deny this objective truth may or may not think they have good reasons for doing so, but either way their beliefs are wrong and that's what matters.
He can model other people just fine, he just puts at least as much weight on the output of their algorithms as on the algorithm itself.
From: nancylebov
2012-10-17 04:41 pm (UTC)

I think there's a more general principle lurking here-- as far as I can tell, most people neglect the fact that knowledge has to be acquired rather than just suddenly appearing in mysteriously favored minds.
From: siodine
2012-10-17 03:13 pm (UTC)

The discrete steps are wrong. From 1-step to 3-step, all you're doing is modeling another mind more accurately, and so it's a continuum. Maybe you could operationalize varying levels of accuracy, though.

More to the point, I don't see a low level of ToM-accuracy being necessarily something innate (I think people with 1-step ToM can achieve 3-step ToM, so to speak). People with generalized social anxiety model other people as prosecutors ready to pounce on any faults -- to a point this is innate but it's treatable by removing the bias. Members of tribes consume their tribe's propaganda in the form of literature, norms, blog posts, comics, and such. Militaries do this to bring up the kill-to-death ratio in battles. The huzzah-science tribe does this to lower the status of creationists within society as a whole (because they have that kind of power -- otherwise the huzzah-science tribe would be taking the opposition more seriously like the creationists).

So, no wonder then people in these tribes inaccurately model others, because they actually believe the members of the opposing tribes are the stereotypes their tribe has made them out to be. Of course creationists don't believe they can support their own arguments, they believe Jesus rode a dinosaur. Of course most Muslims are terrorists, 9/11.


And for anyone in such a tribe that begins to stray outside of their tribe's memesphere, they're going to see some very hard, ego-deflating truths.

From: (Anonymous)
2012-10-17 07:41 pm (UTC)

The tradition of describing people as having or not having a "theory of mind" is always suspiciously dichotomous, but it's especially so in an article about varying depths of "theory of mind". Isn't it better to treat this as a quantitative trait, where people model other minds at varying levels of detail?
From: squid314
2012-10-18 04:16 am (UTC)

Yes, probably.
From: roryokane
2012-10-17 09:39 pm (UTC)

Here’s a link to the linked “Confirmation bias” article at the time of this post, in case the article is edited in the future.
From: gjm11
2012-10-18 12:10 am (UTC)

It feels odd that you're apparently doing both of the following things.

1. Endorsing an approach where one assumes that others (even, e.g., young-earth creationists) understand basically all the same things as one does oneself, but just happen to have unluckily encountered various bits of misleading evidence.

2. Classifying people according to the depth of their theory of mind, which sure looks to me like saying that some other people *don't* understand basically all the same things as you do yourself.

These two aren't quite inconsistent with one another. For instance, you might argue that there's good evidence that (e.g.) that RationalWiki author has a theory-of-mind deficit, on which you're basing your criticism, but that there isn't good evidence that young-earth creationists have the sort of confirmation-bias problem of which that article accuses them, so the author isn't basing their criticism on good evidence.

But it looks to me as if the RationalWiki author can be defended about as well as the creationists can -- e.g., perhaps what s/he really means isn't that no creationist has ever attempted to address what evolutionists say about angiosperms, but that creationists in general don't or that their attempts are so appallingly broken that nothing but severe bias would make them seem credible, and if s/he means one of those things then I see no particular reason to diagnose a theory-of-mind failure. Not, at any rate, any more reason than there is to diagnose one on your part when you accuse them of one.
From: squid314
2012-10-18 04:18 am (UTC)

Good point. On the other hand, would you explain Barry's response the same way? Do you think such an explanation would be correct?
From: gjm11
2012-10-18 08:18 am (UTC)

It would depend on the actual details of the situation. In this particular example you've kinda prejudged this by saying that Barry "is a bully" and Sarah and Mary "are awesome people" (do those reflect different cognitive algorithms, too?), but in practice I know that Barry and Sarah and Mary and I can't all be right about one another, and I'd probably attempt to apply the PoC to all parties on pragmatic grounds. (Which would mean at least *considering the possibility* that any of them really is a Bad Person in some usable sense. Though personally I find the notion of "bad person" almost wholly useless and anyone who makes serious use of it is again probably running different cognitive algorithms from mine.)

Perhaps I should add that for my own definition of "awesome", it's difficult to reconcile Mary's alleged awesomeness with her "hating Sarah viciously".

In general, I think different people really just do run different cognitive algorithms (as in fact you clearly agree from your description of Barry, and what you say about the author of that RationalWiki article), and whether or not it's reasonable to diagnose a particular situation in that way depends a lot on the details. I bet it's reasonable less often than most of us tend to feel, which is one reason why the principle of charity is useful and necessary. Overdiagnosis of different cognitive algorithms has multiple possible causes, and some of them are just a matter of bad epistemic luck rather than theory-of-mind failure.
From: ari_rahikkala
2012-10-18 04:55 am (UTC)

But what if my theory of mind is expressed in a referentially transparent language? :(
From: cakoluchiam
2012-10-22 12:24 pm (UTC)

Wouldn't that imply that you would require perfect knowledge of the base nature of the universe to evaluate? (In which case, wouldn't that be a theorem of mind?)
From: eyelessgame
2012-10-18 01:45 pm (UTC)

I presume you have read Lewis Carroll's What The Tortoise Said To Achilles, right?
From: (Anonymous)
2012-10-19 07:12 pm (UTC)

luck

And that epistemic luck means that at some n your algorithm changes?
From: Julia Wise
2012-10-20 02:34 pm (UTC)

Thank you, this is useful.

I get frustrated when smart people say things like, "Some people just don't believe a woman has a right to control what happens in her own body." It seems like an almost deliberate failure to use theory of mind. Even if they have never had a conversation about abortion with a pro-lifer, it doesn't take much imagination to come up with potential reasons one might be pro-life other than "I believe women should not control their own bodies."