Here, Robin Hanson argues that society is too oriented towards maximizing status. He also provides a simple mathematical example of signaling resulting in a net loss.
Observations:
1. Signaling gets its power from what we choose to value and admire. Should we therefore make a conscious effort to admire useful activities above less useful ones, and to publicize our doing so? I think so.
2. This observation seems to derive in part from a more general rule. If we are optimizing a utility function U2 (here, status), which is not identical to U1 (here, the actual happiness of society), what can we say about the relationship between U2 and U1, where U2 is meant to approximate U1?
(a) Potentially dangerous if taken too literally: If U2 takes a huge positive value on a small part of the domain where U1 is small, then you could wind up choosing a vastly different option than you otherwise would. (For example, you want to be an altruistic person (U1), so you optimize by feeling a warm feeling inside (U2), and then you discover that ingesting a certain drug would, with 10% probability, give you a massively strong warm feeling, which would not in fact help U1 much at all. A toy numerical sketch appears after this list.)
(b) How to measure similarity of utility functions: point (a) suggests you should be careful about how you measure the similarity of U2 and U1 before you agree to take U2 as a proxy for U1.
(I) Focus on implied actions: In general, you can only be hurt in cases where optimizing U2 would select a different action than optimizing U1 would.
(II) Invariances: Some shifts have little or no effect on what we care about, which is the expected value of optimizing the function. For example, positive affine transformations of a utility function (adding a constant, or multiplying by a positive scalar) are known to have no impact on the action chosen to optimize one's pay-off; see the second sketch after this list.
(III) More detailed proposals: see this paper here.
(IV) How monotonicity won't work in general: We'd like to say that if U2 strictly dominates U2', i.e., if U2's evaluation of every outcome is closer to U1's in some sense than U2''s is, then we know we'll do better on U1 by using U2 than by using U2'. This looks untrue to me. Consider for example three actions A1, A2, A3, with these pay-offs:
U1(A1) = 1
U1(A2) = 2
U1(A3) = 2.1
U2(A1) = 1
U2(A2) = 2.001
U2(A3) = 2
U2'(A1) = 5
U2'(A2) = 10
U2'(A3) = 15
Imagine U2 and U2' were two different people's beliefs about the dollar amounts likely to come from playing strategies A1, A2, and A3. U2 is a far more reasonable model of the outcomes than U2' (it is within 0.1 of U1 on every action, while U2' is off by at least 4), yet U2' will result in a higher pay-off: optimizing U2 selects A2, with a true pay-off of 2, while optimizing U2' selects A3, with a true pay-off of 2.1. (The last sketch after this list verifies this.) This result is similar to a no-free-lunch result.
3. Despite the negative result in 2(b)(IV), our partial knowledge still proves useful in some way (as evidence, witness the fact that our intelligence, although limited, did manage to evolve). Presumably there's more symmetry in the actual world, which makes progressively more accurate beliefs more useful, at least within a certain range. Further delineating this range would be useful, so we know where to focus our efforts.
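To make point 2(a) concrete, here's a toy Python sketch. The specific numbers (a warm-feeling score of 1000 for the drug, 10 for volunteering) are made up for illustration:

```python
# Point 2(a): a proxy U2 with a huge spike on a region where U1 is
# small can flip the chosen action. All numbers are hypothetical.
ACTIONS = ["volunteer", "take_drug"]

def U1(action):
    # True utility: actual altruistic impact.
    return {"volunteer": 10.0, "take_drug": 0.1}[action]

def U2(action):
    # Proxy: expected intensity of the warm feeling. The drug gives a
    # massive warm feeling (1000) with probability 0.1.
    return {"volunteer": 10.0, "take_drug": 0.1 * 1000.0}[action]

choice = max(ACTIONS, key=U2)
print(choice, "-> true utility", U1(choice))
# take_drug -> true utility 0.1: the spike in U2 dominates the decision.
```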
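And a quick check of the invariance in 2(b)(II): a positive affine transformation of the pay-offs (scale by a > 0, shift by any b) leaves the expected-utility-maximizing action unchanged. The random pay-off matrix is just a stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)
payoffs = rng.normal(size=(5, 10))  # 5 actions x 10 equally likely outcomes
a, b = 3.7, -12.0                   # arbitrary positive scale and shift

best_original = payoffs.mean(axis=1).argmax()
best_transformed = (a * payoffs + b).mean(axis=1).argmax()

# E[a*U + b] = a*E[U] + b, and a > 0 preserves the ordering of actions,
# so the argmax is identical.
assert best_original == best_transformed
print("best action unchanged:", best_original)
```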
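Finally, the counterexample from 2(b)(IV), verified directly with the pay-offs listed above:

```python
# Pay-offs copied from the example in point 2(b)(IV).
U1  = {"A1": 1.0, "A2": 2.0,   "A3": 2.1}   # true utility
U2  = {"A1": 1.0, "A2": 2.001, "A3": 2.0}   # within 0.1 of U1 everywhere
U2p = {"A1": 5.0, "A2": 10.0,  "A3": 15.0}  # off from U1 by at least 4

def best_action(u):
    # The action an agent optimizing utility function u would choose.
    return max(u, key=u.get)

# The pointwise-closer proxy U2 picks A2; the wildly-off U2' picks A3.
print("U2  chooses", best_action(U2),  "-> true pay-off", U1[best_action(U2)])   # A2 -> 2.0
print("U2' chooses", best_action(U2p), "-> true pay-off", U1[best_action(U2p)])  # A3 -> 2.1
```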