Mysterianism didn't work either, trying clarity again [Oct. 4th, 2012|09:10 pm]
Scott

Well, that parable didn't work. People interpreted it as about the dangers of external augmentation technology, which is not really what I had in mind. One person accused me of dystopianism, which is a sufficiently grave (and reasonable!) accusation to merit a response which I guess involves explaining the whole point outright.

If people have an explicit utility function, then they can be modeled as optimization processes trying to maximize that function.

There's only one best way to run an optimization process (bar the super unlikely case where by coincidence the function has multiple maxima of exactly the same height).

We don't choose our desires, at least not at the base level ("We can do what we want, but we can't want what we want"). So those are out of our control.

And we don't choose what, given a desire, is the best way to achieve it. That's determined by the way the world works. So the optimal plan to achieve our desires is also out of our control.

We necessarily choose our actions based on what best satisfies our desires. These may conflict, but in that case the strongest desire wins out. Even if we do something silly like hit our head with a hammer to prove we can do something we don't desire, that just proves that our desire to prove we can do something we don't desire is greater than our desire not to hit ourselves on the head.

So if our desires are out of our control, and given our desires our plans are out of our control, and given our plans our actions are out of our control, then nothing is in our control and our actions are entirely determined.

This is a stronger form of determinism than the usual boring "Yes, we're made out of matter which follows deterministic physical laws" type. Even if we call any calculations that go on within our brain "free will", the result of those calculations is still determined by our utility function and by the rules of strategy.

Even though the "made out of matter" determinism feels like free will, the "actions constrained by desire and strategy" determinism probably wouldn't. We can imagine a person whose utility*plan interactions are whispered to them by a magic earring, or a rationalist savant who performs mathematical calculations each instant to determine what to do, then goes with whatever branch of the decision tree returns the highest utility. Her actions are constrained entirely by the math and she does not subjectively experience free will.
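(A toy sketch, in Python, of what the savant's situation amounts to; the utility numbers and the list of options are made up purely for illustration. The point is just that once the utility function and the options are fixed, the "choice" is fully determined, barring exact ties.)

    # Toy model of the "rationalist savant": a fixed utility function over a fixed
    # set of options leaves exactly one thing to do (ties aside).
    def choose(options, utility):
        # Deterministic: the same utilities always yield the same action.
        return max(options, key=utility)

    # Hypothetical numbers; any fixed assignment behaves the same way.
    utilities = {"write the paper": 0.9, "go for a run": 0.6, "check the news": 0.2}
    print(choose(utilities, utilities.get))  # always prints "write the paper"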

We are saved from this fate by the fact that we're really really bad at perceiving our own utility function and at evaluating competing strategies. Our transhuman successors might not be so bad at it, and therefore would face this problem. I am unsure whether that would prevent them from being "people" in a vague philosophical sense. Even if people's minds turn out to be mathematical functions, there's a big difference in practice between a person and a mathematical function that just feels like it's a mathematical function.

The parable of the earring was not about the dangers of using technology that wasn't Truly Part Of You, which would indeed have been the kind of dystopianism I dislike. It was about the dangers of becoming too powerful yourself. Such power would probably be useful in an instrumental sense, and in a world like this one where there's a lot of work to be done it would probably be worth using. But in a non-failed world where happiness has become a major consideration, it might be an argument against becoming too formidable.

I don't think anyone in our world right now is in danger of this much formidability. But three hundred years from now when this is the biggest problem of the age, I'm going to say I called it.

Comments:
From: selenite
2012-10-05 05:10 am (UTC)
Niven's played with that concept. His Pak are too intelligent to have free will. Since they always see the best option, there are no decisions to be made. This comes up explicitly in Protector.
From: squid314
2012-10-05 05:24 am (UTC)
Oh. I didn't like Mote in God's Eye or the first quarter of Ringworld, so I haven't really read Niven in depth.
From: sniffnoy
2012-10-05 06:14 am (UTC)
Ringworld isn't actually very good. I think it was mostly a big deal just for introducing the ringworld concept (which I am still surprised was not done earlier).
From: nancylebov
2012-10-06 03:47 pm (UTC)
I think the concept was enough to pretty much carry one novel, but the series went downhill sharply after that.

If you want the best of Niven, N-Space was a collection (including excerpts from novels as well as short stories) that made very good choices.
From: hentaikid
2012-10-05 09:19 am (UTC)
Niven's sci-fi is kinda dated nowadays, but I was a big fan back in the day; the Pournelle collabs, not so much. Protector is interesting for the explicit attempt to have super-intelligent characters in play. However, Protectors have overwhelmingly strong instincts that aren't smart at all, so that kind of counteracts their intellect.
From: mstevens
2012-10-05 09:32 am (UTC)
Personally I like early Niven but not late Niven. Protector is still at the stage where I like his stuff. For my tastes he's better when working alone.
From: hentaikid
2012-10-05 09:45 am (UTC)
Agreed, though unfortunately collabs seem to be all he's done in the last few decades. Legacy of Heorot with Stephen Barnes is not bad though.
From: mstevens
2012-10-05 12:34 pm (UTC)
I'm actually kinda fond of Fallen Angels, although it's in many ways terrible.
From: nancylebov
2012-10-06 03:49 pm (UTC)
Actually, having superintelligences which are limited in that way makes at least as much evolutionary sense as general superintelligence.
From: hentaikid
2012-10-06 03:55 pm (UTC)
Yeah, the Protectors are insanely competitive and xenophobic, so they are always fighting and plotting, and the red queen effect wrt intelligence has to be pretty strong. Dumb Protectors don't get to pass on their genes.
From: Risto Saarelma
2012-10-05 05:57 am (UTC)
Kip Werking seems to have had similar ideas back in 2004. (http://www.transhumanism.org/index.php/WTA/more/haldane2004) The article is a bit meandering, but the "Freedom" section has bits about intelligence augmentation eating away free will when it makes you better at reflecting your goal system.

From: (Anonymous)
2012-10-05 07:08 am (UTC)
So how does the earring decide? What is the process that determines what choice would lead to the highest utility? Even if you have perfect information, you still have to run the calculations/simulate the results of various actions to decide which of them to take. If you offload the decision-making to an outside source, that kills the subjective feeling of free will in *you*.
From: torekp
2012-10-08 01:26 am (UTC)
Yep. It's the decision-making that we call choice, not the particular desires or beliefs that go into the decision.
From: andrewducker
2012-10-05 07:29 am (UTC)
"We necessarily choose our actions based on what best satisfies our desires. These may conflict, but in that case the strongest desire wins out."

Except that "the strongest desire" changes from moment to moment, and is constantly affected by the factors around it (negative consequences, long-term versus short term, etc.)

"The parable of the earring was not about the dangers of using technology that wasn't Truly Part Of You, which would indeed have been the kind of dystopianism I dislike. It was about the dangers of becoming too powerful yourself."

I'm not sure what the difference is, to be honest. All technology is leverage: from the simplicity of an actual lever, via the bow and arrow, to guns and then missiles, you have increases in physical leverage (press a button, destroy the world - maximal destruction for minimal effort), and other technology nicely maps onto similar increases - from spending months painting to pressing a button to capture high-res video, or from the basics of story-telling to IMAX and video games, which can cause a day to vanish in a blur if we're not careful.

We find our basic desires (destruction, art, entertainment) and we maximise our ability to satisfy them. And in the olden days it was hard to get enough power to (for instance) have teams of slaves who could entertain you constantly. Nowadays that power is incredibly cheap, because technology has made it so.

So any parable that's about being too powerful is almost necessarily also about technology, because it's technology that allows the average person to get that power.
From: torekp
2012-10-08 01:20 am (UTC)
Not only does "the strongest desire" change from moment to moment, but it is also subject to interpretation - and interpretation in this case can have the structure of self-fulfilling prophecy. In those cases it is a form of choice.

From: (Anonymous)
2012-10-05 08:39 am (UTC)
I have to say that was pretty creepy.

My own view on free will is 'Thou shalt not ask, else thou shalt lose it'. I feel like I have free will, and talking about it is a kind of absurd 'fated to talk about free will', so I just do what I want.

For the most part, though, I am pretty much in line with Less Wrong (though I do not expect the singularity to occur or for anybody currently alive to become immortal without the use of cryonics).

An important part of free will to me is being able to optimize by a NON-OPAQUE process. I would determine how I wish to live my life, and judge the input of those who offer advice. I don't want some kind of unknowable guide. That to me seems like a lesser Loss Of Humanity (below Death, Wireheading, and Utility Function Collapse (i.e. modification of utility function to an unacceptably simple or alien form even if it provides more happiness, or coerced modification of utility function)).
From: (Anonymous)
2012-10-05 08:42 am (UTC)
Desires are to some extent subject to change in accordance with will, at least to the extent of being shuffled up or down the priority tree: most obviously, many people manage to stop smoking.

Slightly more contentiously, I think one could argue that utility functions are granular: if doing A, B or C would all make me substantially happier than I am now, I may well not care that A would make me fractionally more happy than the other two. (This is a variant of "we're really bad at perceiving our own utility function".)

Consider an unsuccessful petty criminal. He may well know that continuing a lifestyle of casual theft and thuggery will continue to get him into the trouble he's been in for some time, and that getting out of it would make his life measurably better, but short-term gains continue to predominate. How does this fit into the model?

-- random Firedrake
From: xuenay
2012-10-05 09:38 am (UTC)
"Even though the 'made out of matter' determinism feels like free will, the 'actions constrained by desire and strategy' determinism probably wouldn't."

This sounds like a pretty minor issue. If we have the kind of mindhacking technology that lets us become that powerful, then we probably have the technology to make the "actions constrained by desire and strategy" determinism also feel like free will.

Plus I doubt that people who had grown up always being that powerful would even realize what the problem was - it's only folks who are used to having a different kind of experience that see an issue with it. And even many of them, I suspect, would get used to it after the initial discomfort, even without needing to do any mindhacking.
From: platypuslord
2012-10-05 10:12 am (UTC)
Perhaps as we get smarter, our desires will get more complex and interesting and thus harder to fulfill. It's already sort of human nature that, once we accomplish a goal, we set another goal which is more ambitious.

Perhaps as we get smarter, our environment will get more complex and interesting, so any desire will become harder to fulfill. Even given perfect knowledge of my abilities, I still have a great deal of uncertainty about the state of the world.

Many of our goals, even today, are not absolute goals but rather incremental goals: Be the best sports player on the sports team. Be the best musician in the city. Write the best novel. Attract the best romantic partner. If one person were infinitely smarter than everyone else, I could imagine that person getting bored. But if everyone were smart, I expect these goals would scale pretty well.
From: cousin_it
2012-10-05 12:07 pm (UTC)
Wait, are you saying that fulfilling my utility function will lead to an outcome that my utility function doesn't like? That can't be right.
From: handleym99
2017-01-19 12:27 am (UTC)
It is certainly possible, however, that the entire concept of a utility function is so incoherent that it is not even wrong...
The Utility Function concept assumes certain things about the mind, including that it is unitary and that it is rational. Neither of these is necessarily true.

For example, if the mind is more akin to a coalition of independent actors, then those actors can have different preferences, and as soon as you have multiple actors with different preferences, you get all the Condorcet paradoxes so beloved of the voting literature.
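(To make the Condorcet point concrete, here is a toy Python illustration with made-up preference orders for three internal "actors": each actor is individually consistent, but the coalition's pairwise majority preferences form a cycle, so no single utility function summarizes the group.)

    from itertools import combinations

    # Hypothetical preference rankings for three internal "actors" over options A, B, C.
    actors = [
        ["A", "B", "C"],  # actor 1: A > B > C
        ["B", "C", "A"],  # actor 2: B > C > A
        ["C", "A", "B"],  # actor 3: C > A > B
    ]

    def prefers(ranking, x, y):
        # True if this actor ranks x above y.
        return ranking.index(x) < ranking.index(y)

    # Each pairwise comparison is won 2-1, but the winners form a cycle:
    # A beats B, B beats C, and C beats A.
    for x, y in combinations("ABC", 2):
        votes_for_x = sum(prefers(ranking, x, y) for ranking in actors)
        winner, loser = (x, y) if votes_for_x >= 2 else (y, x)
        print(f"{winner} beats {loser} 2-1")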

From: (Anonymous)
2012-10-05 01:05 pm (UTC)
Has mysterianism ever worked, except to produce entertaining prose?
From: UncredibleHallq
2012-10-05 01:13 pm (UTC)
This is going to make me sound like much more of an Eliezer fanboy than I actually am, but this issue seems more or less covered by his posts on Fun Theory (http://lesswrong.com/lw/xy/the_fun_theory_sequence/) and Fake Morality (http://lesswrong.com/lw/ky/fake_morality/).

Basically, if we care about subjective freedom that much (i.e. subjective freedom is in our utility function somewhere), and as long as becoming more powerful isn't a one-way trip (which seems unlikely, given how technology typically operates), then if we ever become too powerful, we'll just cope by voluntarily relinquishing some power.
From: (Anonymous)
2012-10-05 02:19 pm (UTC)
I was thinking about this when reading the earring story, but the conclusion of that story confused me. My brain atrophying is very low in my preference order. So the earring is _lying_ when it says "Better for you if..." because each time it "helps" me I grow to trust it more, which eventually leads to a low point in my preference ordering. The analogy would have been better if the ring had had _no_ disadvantages.
From: mantic_angel
2012-10-06 12:00 am (UTC)
That's sort of like claiming you refuse to use a calculator, because it would atrophy your ability to do mental arithmetic; or that you refuse to drive, because your legs would get weaker than if you biked.

On the off chance both are actually true for you, I'd point out that society has obviously voted otherwise.
From: dudley_doright
2012-10-05 07:40 pm (UTC)
a) In a failed world like ours, making ourselves more formidable is, I think, a massively overdetermined moral obligation.

b) For smallish changes within the current mental regime, I don't think it feels like this. I think that acquiring more energy, more will-power, more strategic-ness, etc. feels good, feels like becoming more free. Yes, my beeminder sometimes demands of me that I write, or that I do a block of work and record that I've done so, but having such a tool makes me free to wield the combined strength of my future selves, and that feels pretty good.
From: kovacsa
2012-10-06 07:37 am (UTC)
I think you're overlooking one hugely important distinction between the earring-wearer and the utility-calculating transhuman.

The earring-wearer does zero utility calculation himself; he doesn't have to make plans, think, reason, etc., in contrast to the transhuman, who works just like us in this regard. Humans value the process and experience of making decisions internally; the earring destroys this shard of value (setting aside whether the earring itself qualifies as a bearer of humane value). This sort of invalidates the connection/analogy between the two, so I wonder if the "power endangers sense of free will" hypothesis gets support from some other source; I think not.

Free will in a normal human is mostly just the process of doing those utility calculations, so how would doing those calculations better, with more insight, more knowledge about factors and effects, etc., make us feel less free? Rather, I think the more optimization power you can exert in pursuit of your goals, the more free you ought to feel - at least this is precisely the way it works currently in real life.

So I think the dangers of becoming too powerful yourself simply generalize to the problem of making arbitrary architectural changes to yourself without destroying value and derailing your future, and issues about free will don't seem to merit special consideration here.


From: Douglas Scheinberg
2012-10-06 07:33 pm (UTC)
It's the video game walkthrough problem again...
From: drdoug
2012-10-07 08:28 am (UTC)
"There's only one best way to run an optimization process (bar the super unlikely case where by coincidence the function has multiple maxima of exactly the same height)."

Really? That seems like an extraordinary claim to me. Optimization is a huge and growing field, and so far as I know it is an awfully long way away from a settled view that there exists "only one best way to run an optimization process", let alone converging on what that way should be. I mean, for a given (single-minimum, non-pathological) function, it's almost certain that one of our vast array of optimisation techniques will work fastest, or in the least space, but SFAIK there's no way of determining that a priori.
From: ipslore
2012-10-08 04:34 pm (UTC)
Huh. I was reading it as a parable about 'lotus eaters', or the dangers of optimizing for pure, hedonistic pleasure instead of 'higher' values.

(Of course, one major difference is that here the lotus makes you productive and successful, instead of lazy and indolent; but really, that just makes it all the more seductive, doesn't it?)
From: marapfhile
2012-10-15 01:09 am (UTC)
Lev Grossman's The Magician King had some brief discussion of this: [spoilers] reality turns out to be largely managed by beings of sufficiently advanced rationality as to have effectively no free will.
From: (Anonymous)
2012-10-17 10:40 pm (UTC)
I think Eliezer already sort of discusses this in the Fun Theory sequence.