Comments:
I have a preference over ice cream flavors. Using the VNM axioms, you can measure how strongly I prefer chocolate over vanilla, measured in units of percent-risk-of-coffee-flavor. Similarly, I have a preference over experiences; and you can measure how strongly I prefer dancing over dishwashing in units of stubbed toes. But given these two measurements, you can't generate an exchange rate between chocolate and dancing without an additional judgment on my part, and there's no principled way to make that judgment.
So it seems to be with interpersonal utility comparison. The failure modes commonly ascribed to interpersonal utility comparison could also be applied to internal utility comparison; just replace the Utility Monster with a Utility Tumor, a preference that crowds out all the others in favor of maximizing the probability of one thing. But each person can generate some sort of exchange rate between all their preferences, and we accept this as authoritative (unless flagrantly stupid) because they're better positioned than we are to know what they want. But when deciding on the exchange rate between A getting chocolate instead of vanilla and B getting to dance instead of washing dishes, there is no longer an obvious person to make authoritative.
Except, of course, for the person making decisions that affect A and B. And while it's not a principled or formalizable method, I think what we really do is blur out details until we find something sufficiently analogous to terms in our own utility functions, and then act as though others shared our utility functions, with in-category substitutions like which ice cream flavor is which. But when we encounter a preference that can't be readily mapped to one of our own, we value it at the same level as aesthetics, our weak, catch-all preference.
in the same way that believing each person to have one breast and one testicle would still allow correct calculation of the total number of breasts and testicles in society. [nitpick] Men have breasts! [/nitpick]
"His name is Robert Paulson"
[nitpick]And people who've had mastectomies have fewer than 2! Unless they've had restorative surgery![/nitpick]
[nitpick]And trans people may have neither set or both sets![/nitpick]
Granted, I have no idea what the ratio between transmen and transwomen might be. Perhaps there is no net effect on the population average.
Misc points:

* In general, allowing people to set their own utility functions leaves things open for exploitation: I'm probably just going to pick whatever thing is at the top of the list of things I might get and tell you I value it to the exclusion of all else. For example, it would be trivial for me to arrange my claimed utility function to value sex with person A more than person A values not having sex with me. I'm not sure how to correct for this, but it might be doable.
* A similar problem definitely has a solution: http://en.wikipedia.org/wiki/Scoring_rule#Proper_scoring_rules . A proper scoring rule, of which we have some examples, is one under which you maximize your score by reporting your true expectations. It would be nice to have a mechanism whereby you maximized your utility by reporting your true utility function, but unfortunately this is a game (in the game-theoretic sense), which makes it much, much harder.
* I think, whatever you do, you're forced to assign everyone the same amount of Utility Points. Not doing so leads to people optimizing themselves for whatever case gets extra Points to spend.
* You've implicitly accepted a variant of preference utilitarianism, where we attempt to satisfy people's personal utility functions. Not all strains of utilitarianism work like this, and it's not obvious to me that they should.
* You haven't even really touched on average vs. total utilitarianism.
"I think, whatever you do, you're forced to assign everyone the same amount of Utility Points"
This is not a well-defined suggestion. I'm guessing that you mean that for each person's utility function, the sum of the utilities of each possible outcome is a constant (e.g. everyone's utility function sums to 100 utils across all states of the universe). If you meant something else, please explain further, because I couldn't figure out any other way to interpret your suggestion. I assume you also want no outcomes to have negative utility? Otherwise, you can make your utilities much greater in magnitude and still get them to add to 100. What if there are an infinite number of possible outcomes? In that case, the sum of the utilities of the possible outcomes might not even converge, so there is no way to normalize the sum to 100.
Even in cases where there are a finite number of outcomes, a simple example is enough to show that the suggestion makes no sense. Suppose there are 2 people: Alice and Bob, and 3 outcomes: X, Y, and Z. Alice likes Z best and is indifferent between X and Y. Bob thinks Z is worst and is indifferent between X and Y. Intuitively, it doesn't look like this gives us reason to prefer Z over X and Y in aggregate. But if we take your suggestion, Alice's utility function gives 0 utils to X and Y and 100 utils to Z, and Bob's utility function gives 0 utils to Z and 50 each to X and Y. In total, X and Y each get 50 utils, and Z gets 100, and is declared the best option. Effectively, Bob is being punished for his preferences being more easily achievable (there are 2 options that Bob likes, and only 1 option that Alice likes).
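The arithmetic here is easy to verify. A minimal sketch in Python, using the outcomes and preferences from the example above:

```python
# Normalize each person's utility function so it sums to 100 across outcomes,
# as the "same amount of Utility Points" suggestion would require.
def normalize(utilities, total=100.0):
    s = sum(utilities.values())
    return {outcome: total * u / s for outcome, u in utilities.items()}

# Alice likes Z best and is indifferent between X and Y;
# Bob thinks Z is worst and is indifferent between X and Y.
alice = normalize({"X": 0, "Y": 0, "Z": 1})
bob = normalize({"X": 1, "Y": 1, "Z": 0})

# Aggregate by summing the normalized utilities.
aggregate = {o: alice[o] + bob[o] for o in ("X", "Y", "Z")}
print(aggregate)  # X and Y get 50 each, Z gets 100, so Z is declared best
```

As the comment argues, Z wins only because Bob's 100 points are spread over two acceptable outcomes while Alice's are concentrated on one.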
From: (Anonymous) 2013-01-22 03:27 am (UTC)
DnDis
http://dresdencodak.com/2006/12/03/dungeons-and-discourse/ and http://dresdencodak.com/2009/01/27/advanced-dungeons-and-discourse/ were turned into an actual game: http://www.raikoth.net/Stuff/ddisplayer.pdf
From: ikadell 2013-01-22 02:27 pm (UTC)
Re: DnDis
This is so totally awesome. If you guys ever need to test drive it I'm in.
From: (Anonymous) 2013-01-22 04:01 am (UTC)
dk
"Ordinal" vs "cardinal" is the problem that VNM solves. When people say that it's just ordinal, they mean that you can determine that someone prefers saving the Amazon to landing on Mars, but they deny that you can determine that the person prefers it "twice as much." That is, you can order the outcomes, but you can't quantify them. Because probability doesn't exist. That's why they're called Austrian economists, rather than Austro-Hungarian economists. But, yes, even after VNM, there remains the problem of scaling Alice's preferences to be all stronger than Bob's preferences.
I'm sure you know everything I have to say, but I think your phrasing undersells the role of QALYs in the world today. "Some public policy experts actually use utilitarian calculations over QALYs to make policy." Although that sentence contains the phrase "make policy," it is undercut in my ears by the word "experts," which makes me read it as people lobbying for the future use of QALYs in policy. Indeed, your examples sound like starry-eyed academics, who may have great influence over how doctors practice, but little direct influence on explicit policy, particularly money. But QALYs are much more popular than that. Today they are a standard part of how drug companies lobby insurers, especially European governments, to pay for their new drugs.
If we were really clever, we could come up with a curve representing the utility of money at different wealth levels, and use the utility of money transformed via that curve as a third.
It's standard in economics to use a wealth utility function of the form f = sqrt(x) or f = ln(x), or more generally f = x^c for 0 < c < 1 a constant and x = total wealth. The Wikipedia article on utility is misleadingly abstract.
I have some articles on the subject written up but can't blog them until I find and learn to use decent graphing software.
From: (Anonymous) 2013-01-22 04:44 pm (UTC)
Oh, really? A progressive tax wouldn't make sense with those utility functions.
-Nisan
Why not? The main criteria you want in a wealth utility function are f increasing and f' decreasing. It's easy to show that progressive taxation doesn't affect that. Welfare traps are a different matter, of course.
I'm thinking that a flat tax rate of r gives each person approximately -rxf'(x) utilons. If f is logarithmic, the flat tax would be fair in that it would give just as much disutility to the rich as to the poor. A progressive tax rate would be unfair to the rich.
So I'm thinking that someone who thinks that a progressive tax is fair must think that the utility function grows even slower than a logarithm.
If f is logarithmic, a flat tax removes a constant amount of utilons. The rich person with 10 utilons loses one; the poor individual with two utilons loses one. I'm not sure that you can even meaningfully reason in those terms, but it's certainly not clear to me that that's a 'fair' situation.
Logarithmic utility has the problem that destitution is rated at -infinity utilons, of course.
If you use power-function utility, a flat tax at rate r multiplies everyone's utility by the same factor (1-r)^c, which seems fair as a first pass.
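Both claims are easy to check numerically. A minimal sketch in Python (the tax rate and exponent are arbitrary): under log utility a flat tax costs everyone the same number of utilons, and under power-law utility it multiplies everyone's utility by the same factor.

```python
import math

r = 0.2  # flat tax rate (arbitrary)
c = 0.5  # exponent for the power-law utility x**c (arbitrary, 0 < c < 1)

for wealth in (1e4, 1e6, 1e8):
    after = (1 - r) * wealth
    # Log utility: the loss ln(x) - ln((1-r)x) = -ln(1-r) is the same for everyone.
    log_loss = math.log(wealth) - math.log(after)
    # Power utility: post-tax utility is (1-r)**c times pre-tax utility,
    # i.e. a flat tax multiplies everyone's utility by the same factor.
    power_ratio = after**c / wealth**c
    print(wealth, round(log_loss, 6), round(power_ratio, 6))
```

The printed loss and ratio come out identical across all three wealth levels.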
From: (Anonymous) 2013-01-23 06:36 am (UTC)
You and gjm have changed my thinking on this matter.
-Nisan
Or have an explicit term in their global utility function for equality (hence, in particular, not have a global utility function that's just a sum of individual agents' utilities).
Or think that richer people will be more effective at minimizing their tax burden, and therefore want a nominally progressive system with the goal of getting approximately flat payments in reality.
Or value poor people more than rich people for some reason.
Or think that, quite aside from individual utilities, transferring money from richer people to poorer people tends to have a stimulative effect on the economy because poor people spend more of their money than rich people do.
Or, I suspect, quite a lot of other possibilities. So it's far from clear to me that there's any real conflict between preferring a progressive tax regime and thinking that utility grows roughly logarithmically with wealth.
(My own intuition, incidentally, says that utility typically grows a bit slower than logarithmically for perfectly selfish agents -- the step from a net worth of $10k to a net worth of $1M seems bigger than that from a net worth of $10M to one of $1B -- and almost linearly for perfectly altruistic ones because they can use N times as much wealth to help N times as many people. I'm not sure what the right way is to combine these for real people who are mostly selfish but a bit altruistic. You might think it would be large_constant * sublogarithmic + small_constant * linear, so that the linear term would dominate, but that may be overoptimistic about altruism levels.)
From: (Anonymous) 2013-01-23 05:32 am (UTC)
The goal is not some abstract ideal of fairness (ask the fat man how "fair" the solution to the trolley problem is). The goal is maximizing utility.
We tax people because we need money to run the government (and the tax-funded government provides net positive utility). The idea is to get that money while minimizing the disutility of taxation. At a given level of revenue, a progressive tax will produce less disutility than a flat tax for any wealth utility function f(x) with f'(x) decreasing.
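A minimal numerical illustration of this claim, with invented numbers: two people, sqrt utility (any f with f' decreasing behaves similarly), and the same revenue raised either by a flat 10% rate or by an extreme "progressive" scheme that puts the whole burden on the richer person.

```python
import math

f = math.sqrt  # a concave wealth utility: f increasing, f' decreasing

rich, poor = 100.0, 10.0
revenue = 11.0  # the same revenue in both schemes

# Flat tax: both pay 10% of their wealth (10.0 and 1.0 respectively).
flat_loss = (f(rich) - f(rich - 10.0)) + (f(poor) - f(poor - 1.0))

# Extreme progressive tax: the whole burden falls on the richer person.
progressive_loss = f(rich) - f(rich - revenue)

print(flat_loss, progressive_loss)  # the progressive scheme destroys less utility
```

Because f' is decreasing, each dollar shifted from the poor person's bill to the rich person's bill destroys less utility than it saves.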
Also, I hate to point this out, but scaling utility with personal wealth does solve, almost tautologically, most of those other problems--partially and probabilistically, but that seems to be the best we can do a lot of the time. What I mean is, people with a lot of money have demonstrated that they are less likely to do stupid, destructive things with it, and conversely struggling to accumulate wealth conditions people to avoid stupid, destructive behavior. Money acts as a rationality signal. There are tons and tons of caveats and circumstances where this breaks down, but I'm skeptical that anything more efficient is likely to come along (pre-singularity anyway). It's basically the Economic Calculation Problem.
The UK's National Health Service explicitly uses QALYs (and cost-per-QALY) to make decisions about health care treatments. (See for example this page from NICE, the body that assesses new treatments and decides whether the NHS should pay for them: http://www.nice.org.uk/newsroom/features/measuringeffectivenessandcosteffectivenesstheqaly.jsp). It's not perfect, and subject to local variation and political pressure (eg a standard approach for manufacturers of treatments deemed to be too expensive is to lobby politicians and campaign to have the decision overturned, and this sometimes works), but I'm fairly impressed.
I don't think even the most conservative mathematician could figure out a plausible way to make the utilitarian costs of gay marriage appear to exceed the benefits.
That's a very interesting question. I agree with the sentiment that gay marriage does a lot of good and doesn't really do any harm, and that's why it should be enacted. However, some people would disagree that adding up the utilities is the right way to decide, and that seems to be where I disagree with them in the first place. (I would argue that even if you take a more personal-virtue approach to ethics, gay marriage is still massively the way to go.)
On the other hand, there are times when I do agree with quibbles along the lines of "OK, it looks utility-positive, but I'm pretty sure X is going to come back and bite us later, even if I can't explain exactly why."
It's easy. Just note that straight couples outnumber gay couples by a factor of roughly 50:1, which is nearly two orders of magnitude. At that ratio, the per-couple harm to straight marriages need only be 2% as bad as the per-couple harm of denying gay couples marriage rights for the totals to balance; so if you can argue that it's even 5% as bad, it's a no-brainer. And 'civil partnerships' go a long way to offsetting the right-hand side of that inequality.
From: (Anonymous) 2013-01-23 07:59 pm (UTC)
Personally, I think that if people were able to choose their culture more, a lot of similar problems would disappear. But I don't see that happening.
people would be forced to say which term in the equation was wrong, instead of talking about how the senator proposing it had an affair or something.
You don't really even believe in this sentence yourself, do you? ;)
>Just as you can assign logarithmic scoring rules to beliefs to force people to make them correspond to probabilities, maybe you can assign them to wants as well? So we could ask people to assign 100% among the five goods in our basket, with the percent equalling the probability that each event will happen, and use some scoring rule to prevent people from assigning all probability to the event they want the most? Mathematicians, back me up on this?
Yes, something like that will work in the abstract. Here's how you force an unrealistically rational agent, with preferences that are linear and increasing in the amount of each good it has, to give you its exchange rate amongst the goods. Give the agent a lump of clay to be divided into piles corresponding to each good. The agent will be rewarded with an amount of each good which is the log of the amount of clay it put into that pile. The value-maximizing solution is to allocate the clay in proportion to how much the agent values each good.
This works because if the agent values apples, say, twice as much as bananas, the point at which there are no opportunities to gain value by moving clay from one pile to the other is exactly the point at which there is twice as much clay in the apple pile as the banana pile, because this is the point where adding a smidgen of clay to the apple pile yields half as many marginal apples as the marginal bananas you'd get by adding the clay to the banana pile. This happens because (d/dx)ln(x) = 1/x, or in English, because under our scoring rule the marginal amount of some good you can get by adding another smidgen of clay to its pile is inversely proportional to the amount of clay already there.
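A quick numerical check of this mechanism in Python (the goods, values, and grid search are invented for illustration): maximizing 2·log(apple clay) + 1·log(banana clay) over splits of 100 units of clay should put about two-thirds of the clay in the apple pile.

```python
import math

# The agent values apples twice as much as bananas.
values = {"apples": 2.0, "bananas": 1.0}
clay = 100.0

def reward(apple_clay):
    """Total value under the log scoring rule for a given clay split."""
    banana_clay = clay - apple_clay
    return (values["apples"] * math.log(apple_clay)
            + values["bananas"] * math.log(banana_clay))

# Grid search over splits in steps of 0.1; the best split should allocate
# clay proportionally to the values, i.e. about 2/3 to the apple pile.
best = max((reward(a), a) for a in [i / 10 for i in range(1, 1000)])
print(best[1])  # ≈ 66.7
```

The optimum sits where the weighted marginal returns 2/c_apple and 1/c_banana are equal, exactly as the comment describes.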
The practical approach that I've had in mind is a simpler / less mathy version of your suggestion. Give everyone a list of (say) 15 things which are good in diverse ways, and about the same order of magnitude (you could also include some bad things and flip the sign). Have each person rank the 15 things according to their preferences. Define whatever item they rank as 8th to be 1 util for them, and assume that everyone gets the same value from 1 util.
Now you can use standard tradeoff questions to sketch out each individual's utility function, and then calculate away with this cardinal interpersonal utility function (made from ordinal preferences). The advantage of using just the median item (in addition to its simplicity) is that some people might have weird, extreme reactions to a few of the 15 items, but the strength of their preference for their median item should usually be roughly similar to other people's (assuming that people are roughly similar to each other).
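A minimal sketch of this median-item scheme in Python (the goods and rankings are invented, and I've used 5 items instead of 15 for brevity; with 15 items the same index picks the 8th-ranked item):

```python
# Each person ranks a shared basket of goods from most to least preferred.
# These names and orderings are made up purely for illustration.
rankings = {
    "Alice": ["concert", "meal", "book", "haircut", "massage"],
    "Bob": ["book", "massage", "concert", "meal", "haircut"],
}

def unit_good(ranking):
    """Return the median-ranked item, defined to be worth 1 util for that person."""
    return ranking[len(ranking) // 2]

for person, ranking in rankings.items():
    print(person, "-> 1 util =", unit_good(ranking))
```

Standard tradeoff questions can then price every other good in each person's personal util, and the median items are treated as interpersonally equal.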
the strength of their preference for their median item should usually be roughly similar to other people's
Not if people's priorities follow something like a power law, and the exponent varies. e.g. compare religious ascetics who value God >>> everything else vs. cosmopolitan try-anything-once types.
From: (Anonymous) 2013-01-31 04:19 pm (UTC)
Question
Hi,
I have a quick question about your blog, would you mind emailing me when you get a chance?
Thanks,
Cameron
cameronvsj(at)gmail.com