Niven's played with that concept. His Pak are too intelligent to have free will. Since they always see the best option, there are no decisions to be made. This comes up explicitly in Protector.
Oh. I didn't like Mote in God's Eye or the first quarter of Ringworld, so I haven't really read Niven in depth.
Ringworld isn't actually very good. I think it was mostly a big deal just for introducing the ringworld concept (which I am still surprised was not done earlier).
I think the concept was enough to pretty much carry one novel, but the series went downhill sharply after that.
If you want the best of Niven, N-Space was a collection (including excerpts from novels as well as short stories) that made very good choices.
Niven's sci fi is kinda dated nowadays, but I was a big fan back in the day; the Pournelle collabs, not so much. Protector is interesting for the explicit attempt to have superintelligent characters in play. However, Protectors have overwhelmingly strong instincts that aren't smart at all, so that kind of counteracts their intellect.
Personally I like early Niven but not late Niven. Protector is still at the stage where I like his stuff. For my tastes he's better when working alone.
Agreed, though unfortunately collabs seem to be all he's done in the last few decades. Legacy of Heorot with Steven Barnes is not bad though.
I'm actually kinda fond of Fallen Angels, although it's in many ways terrible.
Actually, having superintelligences which are limited in that way makes at least as much evolutionary sense as general superintelligence.
Yeah the Protectors are insanely competitive and xenophobic so they are always fighting and plotting, so the red queen effect wrt intelligence has to be pretty strong. Dumb protectors don't get to pass on their genes.
Kip Werking seems to have had similar ideas back in 2004. (http://www.transhumanism.org/index.php/WTA/more/haldane2004) The article is a bit meandering, but the "Freedom" section has bits about intelligence augmentation eating away free will when it makes you better at reflecting your goal system.
Edited at 2012-10-05 05:58 am (UTC)
2012-10-05 07:08 am (UTC)
So how does the earring decide? What is the process that determines what choice would lead to the highest utility? Even if you have perfect information, you still have to run the calculations/simulate the results of various actions to decide which of them to take. If you offload the decision-making to an outside source, that kills the subjective feeling of free will in *you*.
Yep. It's the decision-making that we call choice, not the particular desires or beliefs that go into the decision.
"We necessarily choose our actions based on what best satisfies our desires. These may conflict, but in that case the strongest desire wins out."
Except that "the strongest desire" changes from moment to moment, and is constantly affected by the factors around it (negative consequences, long-term versus short term, etc.)
"The parable of the earring was not about the dangers of using technology that wasn't Truly Part Of You, which would indeed have been the kind of dystopianism I dislike. It was about the dangers of becoming too powerful yourself."
I'm not sure what the difference is, to be honest. All technology is leverage. From the simplicity of an actual lever, via the bow and arrow, to guns and then missiles, you get increases in physical leverage (press a button, destroy the world: maximal destruction for minimal effort), and other technology maps onto similar increases, from spending months on a painting to pressing a button to capture high-res video, or from the basics of storytelling to IMAX and video games which can make a day vanish in a blur if we're not careful.
We find our basic desires (destruction, art, entertainment) and we maximise our ability to satisfy them. And in the olden days it was hard to get enough power to (for instance) have teams of slaves who could entertain you constantly. Nowadays that power is incredibly cheap, because technology has made it so.
So any parable that's about being too powerful is almost necessarily also about technology, because it's technology that allows the average person to get that power.
Not only does "the strongest desire" change from moment to moment, but it is also subject to interpretation - and interpretation in this case can have the structure of self-fulfilling prophecy. In those cases it is a form of choice.
Edited at 2012-10-08 01:22 am (UTC)
2012-10-05 08:39 am (UTC)
I have to say that was pretty creepy.
My own view on free will is 'Thou shalt not ask, else thou shalt lose it'. I feel like I have free will, and talking about it is a kind of absurd 'fated to talk about free will', so I just do what I want.
For the most part, though, I am pretty much in line with Less Wrong (though I do not expect the singularity to occur, or for anybody currently alive to become immortal without the use of cryonics).
An important part of free will to me is being able to optimize by a NON-OPAQUE process. I would determine how I wish to live my life, and judge the input of those who offer advice. I don't want some kind of unknowable guide. That to me seems like a lesser Loss Of Humanity (below Death, Wireheading, and Utility Function Collapse, i.e. modification of the utility function to an unacceptably simple or alien form even if it provides more happiness, or coerced modification of the utility function).
2012-10-05 08:42 am (UTC)
Desires are to some extent subject to change in accordance with will, at least to the extent of being shuffled up or down the priority tree: most obviously, many people manage to stop smoking.
Slightly more contentiously, I think one could argue that utility functions are granular: if doing A, B or C would all make me substantially happier than I am now, I may well not care that A would make me fractionally more happy than the other two. (This is a variant of "we're really bad at perceiving our own utility function".)
Consider an unsuccessful petty criminal. He may well know that continuing a lifestyle of casual theft and thuggery will continue to get him into the trouble he's been in for some time, and that getting out of it would make his life measurably better, but short-term gains continue to predominate. How does this fit into the model?
-- random Firedrake
Even though the "made out of matter" determinism feels like free will, the "actions constrained by desire and strategy" determinism probably wouldn't.
This sounds like a pretty minor issue. If we have the kind of mindhacking technology that lets us become that powerful, then we probably have the technology to make the "actions constrained by desire and strategy" determinism also feel like free will.
Plus I doubt that people who had grown up always being that powerful would even realize what the problem was - it's only folks who are used to having a different kind of experience that see an issue with it. And even many of them, I suspect, would get used to it after the initial discomfort, even without needing to do any mindhacking.
Perhaps as we get smarter, our desires will get more complex and interesting and thus harder to fulfill. It's already sort of human nature that, once we accomplish a goal, we set another goal which is more ambitious.
Perhaps as we get smarter, our environment will get more complex and interesting, so any desire will become harder to fulfill. Even given perfect knowledge of my abilities, I still have a great deal of uncertainty about the state of the world.
Many of our goals, even today, are not absolute goals but rather incremental goals: Be the best sports player on the sports team. Be the best musician in the city. Write the best novel. Attract the best romantic partner. If one person were infinitely smarter than everyone else, I could imagine that person getting bored. But if everyone were smart, I expect these goals would scale pretty well.
Wait, are you saying that fulfilling my utility function will lead to an outcome that my utility function doesn't like? That can't be right.
It is certainly possible, however, that the entire concept of a utility function is so incoherent that it is not even wrong...
The Utility Function concept assumes certain things about the mind, including that it is unitary and that it is rational. Neither of these is necessarily true.
For example, if the mind is more akin to a coalition of independent actors, then those actors can have different preferences, and as soon as you have multiple actors with different preferences, you get all the Condorcet paradoxes so beloved of the voting literature.
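A minimal sketch of that point (the rankings are hypothetical, chosen only to illustrate): three internal "actors" can each hold perfectly transitive preferences while their majority vote cycles, so the coalition as a whole has no coherent "most preferred" option.

```python
# Three hypothetical actors, each with a transitive ranking over options A, B, C.
actors = [
    ["A", "B", "C"],  # actor 1 prefers A > B > C
    ["B", "C", "A"],  # actor 2 prefers B > C > A
    ["C", "A", "B"],  # actor 3 prefers C > A > B
]

def majority_prefers(x, y):
    """True if a majority of actors rank x above y (lower index = more preferred)."""
    votes = sum(1 for ranking in actors if ranking.index(x) < ranking.index(y))
    return votes > len(actors) / 2

# A beats B, B beats C, and yet C beats A: a Condorcet cycle.
cycle = [majority_prefers("A", "B"),
         majority_prefers("B", "C"),
         majority_prefers("C", "A")]
print(cycle)  # [True, True, True]
```

Any "winner" the coalition picks is majority-dispreferred to some alternative, which is exactly the incoherence the utility-function picture rules out.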
Edited at 2017-01-19 12:28 am (UTC)
2012-10-05 01:05 pm (UTC)
Has mysterianism ever worked, except to produce entertaining prose?
This is going to make me sound like much more of an Eliezer fanboy than I actually am, but this issue seems more or less covered by his posts on Fun Theory (http://lesswrong.com/lw/xy/the_fun_theory_sequence/) and Fake Morality (http://lesswrong.com/lw/ky/fake_morality/).
Basically, if we care about subjective freedom that much (i.e. subjective freedom is in our utility function somewhere), and becoming more powerful isn't a one-way trip (which seems unlikely, given how technology typically operates), then if we ever become too powerful, we'll just cope by voluntarily relinquishing some power.
2012-10-05 02:19 pm (UTC)
I was thinking about this when reading the earring story, but the conclusion of that story confused me. My brain atrophying is very low in my preference order. So the earring is _lying_ when it says "Better for you if...", because each time it "helps" me I grow to trust it more, which eventually leads to a low point in my preference ordering. The analogy would have been better if the earring had had _no_ disadvantages.
That's sort of like claiming you refuse to use a calculator, because it would atrophy your ability to do mental arithmetic; or that you refuse to drive, because your legs would get weaker than if you biked.
On the off chance both are actually true for you, I'd point out that society has obviously voted otherwise.
a) In a failed world like ours, making ourselves more formidable is, I think, a massively overdetermined moral obligation.
b) For smallish changes within the current mental regime, I don't think it feels like this. I think that acquiring more energy, more will-power, more strategic-ness, etc. feels good, feels like becoming more free. Yes, my beeminder sometimes demands of me that I write, or that I do a block of work and record that I've done so, but having such a tool makes me free to wield the combined strength of my future selves, and that feels pretty good.
I think you're overlooking one hugely important distinction between the earring-wearer and the utility-calculating transhuman.
The earring-wearer does zero utility calculation himself; he doesn't have to make plans, think, reason, etc., in contrast to the transhuman, who works just like us in this regard. Humans value the process and experience of making decisions internally; the earring destroys this shard of value (setting aside whether the earring itself qualifies as a bearer of humane value). This sort of invalidates the connection/analogy between the two, so I wonder if the "power endangers sense of free will" hypothesis gets support from some other source; I think not.
Free will in a normal human is mostly just the process of doing those utility calculations, so why would doing those calculations better, with more insight, more knowledge about factors and effects, etc., make us feel less free? Rather, I think the more optimization power you can exert in pursuit of your goals, the more free you ought to feel - at least this is precisely the way it works currently in real life.
So I think the dangers of becoming too powerful yourself just simply generalizes to the problem of making arbitrary architectural changes to yourself without destroying value and derailing your future, and issues about free will don't seem to merit special consideration here.
Edited at 2012-10-06 07:38 am (UTC)
It's the video game walkthrough problem again...
"There's only one best way to run an optimization process (bar the super unlikely case where by coincidence the function has multiple maxima of exactly the same height)."
Really? That seems like an extraordinary claim to me. Optimization is a huge and growing field, and so far as I know it is an awfully long way from a settled view that there exists "only one best way to run an optimization process", let alone converging on what that way should be. I mean, for a given (single-minimum, non-pathological) function, it's almost certain that one of our vast array of optimization techniques will work fastest, or in the least space, but SFAIK there's no way of determining that a priori.
Huh. I was reading it as a parable about 'lotus eaters', or the dangers of optimizing for pure, hedonistic pleasure instead of 'higher' values.
(Of course, one major difference is that here the lotus makes you productive and successful, instead of lazy and indolent; but really, that just makes it all the more seductive, doesn't it?)
Lev Grossman's The Magician King
had some brief discussion of this: [spoilers] reality turns out to be largely managed by beings of sufficiently advanced rationality as to have effectively no free will.
2012-10-17 10:40 pm (UTC)
I think Eliezer already sort of discusses this in the Fun Theory sequence.