Why can't any action, like working on AI safety, be broken down into lots of low-odds, high-stakes sub-actions, each of which is a Pascal's mugging at the margin? (Below is from Alan Hájek on the 80,000 Hours podcast.)
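A toy illustration of the decomposition worry (hypothetical numbers, not Hájek's): slice a single action's payoff distribution into sub-events, each with mugging-like odds.

```python
# Hypothetical slices of one action's payoff distribution:
# each slice has a tiny probability and a huge payoff.
slices = [(10.0 ** -k, 10.0 ** k) for k in range(3, 10)]

for p, v in slices:
    # Each slice, taken on its own, looks like a Pascal's mugging...
    print(f"p = {p:.0e}, payoff = {v:.0e}, marginal EV = {p * v:.1f}")

# ...yet jointly the slices carry a perfectly ordinary expected value.
print("total EV:", sum(p * v for p, v in slices))
```

If you refuse each slice as a mugging, you seemingly refuse the ordinary action they jointly compose.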
Relative utility theory? Maximisation? Gives a lexical rule for deciding among infinite-EV choices (which every choice is, if you don't assign probability 0 to St. Petersburg-style gambles).
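For reference, the standard St. Petersburg arithmetic behind that parenthetical (just the textbook game, nothing specific to Hájek's framing):

```latex
% Payoff $2^n$ if the first heads lands on toss $n$ (fair coin).
\[
  \mathbb{E}[X] = \sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n}
                = \sum_{n=1}^{\infty} 1 = \infty
\]
% So any option giving the game probability $p > 0$ inherits infinite EV:
\[
  p \cdot \infty + (1 - p) \cdot c = \infty \quad \text{for any finite } c,
\]
% and plain EV maximisation can no longer rank such options,
% which is what a lexical rule is meant to repair.
```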
Problems with the most-similar-world analysis of counterfactuals:
- If Trump or Biden had won, the president would be a Democrat. Seems false, since you should look at (some) Trump worlds, but the closest antecedent world is a Biden world, so the analysis says true.
- If I were over 7 ft tall, horses would fly: there's no uniquely closest antecedent world for the analysis to check, because for any height over 7 ft there's a closer one (the reals are dense; see the note after this list).
- If I were at least 7 ft tall, Q: seems wrong to say this should be evaluated by looking only at the world(s) where I'm exactly 7.000000000000… ft tall.
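The density point from the "over 7 ft" example, spelled out (a standard observation, not Hájek's exact wording):

```latex
% For any candidate closest world where my height is $h > 7$ ft, the world
% where my height is $\tfrac{7+h}{2}$ also satisfies the antecedent and is
% strictly closer ($7 < \tfrac{7+h}{2} < h$), so no closest world exists.
\[
  \forall h > 7 \;\; \exists h' \;\; \bigl(7 < h' < h\bigr), \qquad
  \text{e.g. } h' = \frac{7 + h}{2}.
\]
```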