Identity, collective action, future generations, population ethics
Whitman’s poem “Crossing Brooklyn Ferry” shows something about coordinating with the future and caring about it.
Why the change from “elders past, present and future” to “past, present and emerging”? Is it relevant to Aboriginal conceptions of time, or to longtermism/future generations?
Read Gloor’s “Population Ethics Without Axiology” as a jumping-off point.
WWOTF Notes for Thucydides
Daniel Munoz on avoiding transitivity:
- From https://munozphilosophy.com/research: How do different values combine to determine what’s best overall? I got into this question while writing Three Paradoxes, and my hunch was to keep drawing on social choice theory, the study of how individual votes can be combined into a collective choice.
- In my view, multidimensional values are everywhere in ethics. Their essential mark is nonfungibility: A and B can be good in different ways, so that choosing A is a loss in some respect, even if B isn’t better overall.
- In Sources of Transitivity (E&P), I argue that multidimensionality, not (just) Temkin’s “essential comparativity,” is the key to modeling nontransitive value relations. (I made a similar argument in Three Paradoxes about “menu-relativity.”) In [WIP], I argue that multidimensionality, not Parfit’s “imprecision,” is the key to modeling “sweetening” cases, and the key to defining Chang’s concept of parity. In The Many, the Few, and the Nature of Value (Ergo), I apply these ideas to the question of whether “the numbers should count” when deciding whom to save. I also claim that different lives have nonfungible values; so, the goodness of each life is measured by its own dimension.
- But what are dimensions? In Dimensions of Value (Noûs, with Brian Hedden), we put forward a definition: dimensions of value are genuine and distinct ways in which a thing can be good or bad, to which overall value is responsive. We then ask what “responsiveness” should amount to. Working through some examples from Ruth Chang, we end up with two principles: the familiar Strong Pareto (A is better overall if better in one way, at least as good in all others) and a novel hypothesis we call Dimensionalism (how A and B compare overall depends only on how they compare along the relevant dimensions). [Toy sketch below.]
- My influences here are Amartya Sen, Brian Hedden, Kieran Setiya, Tyler Doggett, Toby Handfield, Caspar Hare, Ruth Chang, Walter Sinnott-Armstrong, Derek Parfit, Wlodek Rabinowicz, Larry Temkin, and Richard Yetter Chappell. The elusive John Taurek is a constant inspiration.
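A toy sketch of those two principles, and of how multidimensionality breeds nontransitivity (my own construction, not Munoz’s; the options, scores, and majority-of-dimensions rule are invented for illustration):

```python
from itertools import permutations

# Hypothetical scores for three options on three dimensions of value.
options = {
    "A": (3, 1, 2),
    "B": (2, 3, 1),
    "C": (1, 2, 3),
}

def beats(x, y):
    """x beats y overall iff x scores higher on a strict majority of dimensions.

    The verdict depends only on dimension-wise comparisons (Dimensionalism),
    and an option that wins on every dimension always wins overall, so
    Pareto-style dominance is respected.
    """
    wins = sum(a > b for a, b in zip(options[x], options[y]))
    return wins > len(options[x]) / 2

for x, y in permutations(options, 2):
    if beats(x, y):
        print(f"{x} beats {y}")
# Prints: A beats B, B beats C, C beats A. The "beats" relation cycles,
# even though each dimension on its own is a perfectly transitive ordering.
```

The point of the sketch: nothing exotic is needed for a cycle; three well-behaved dimensions plus a natural dimensionalist aggregation rule already produce one.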
Re that talk on states’ responsibilities over time: how would a Parfitian closest-continuer view work with states that are the closest continuers of victims and perpetrators? How is the persistence of an elite society sufficient for the transmission of responsibility even when the state dies (e.g. Japan after the US occupation)? Do we need to incorporate connectedness too?
- The elites function as responsibility-transformers, like electrical transformers
Fabian Beigang’s paper on avoiding the algorithmic-fairness impossibility results might transfer to the population-ethics impossibility results.
We can’t know which individuals/persons will be alive in the future, but so long as we have sufficient descriptive knowledge of them and causal influence on them, we can act collectively with them.
How does this link to game theory, and to the idea that the concept of a person is necessary for moral scorekeeping and trust?
Future generations, Samuel Scheffler (?): people prefer 10,000 small generations over 10 huge ones. Why? Because they’ll be further from the end, where the pyramid scheme collapses?
Maybe partly, but also because many projects we care about, like intellectual progress, need time for evolutionary effects to take hold (e.g. scientists dying off, selection of a canon). Plus, low-hanging-fruit effects mean small populations can have disproportionate impacts.
On the view that additional value in the world has diminishing marginal utility: this has the absurd conclusion that you need to know whether there are aliens, and how good ancient Egyptian lives were, to make moral decisions. Could this be fixed by holding that the counterfactual value you add has diminishing marginal utility, and/or that your obligations regarding future welfare run only to currently existing practices, so that the number of people living in the future doesn’t directly matter, since they’re just the supervenience base for what you have obligations to?
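A minimal sketch of the contrast (the log function and all the numbers are placeholders, not anyone’s considered view):

```python
import math

def marginal_value(background: float, addition: float) -> float:
    """Value added under a concave (diminishing-marginal-utility) function
    of the world's *total* welfare; log stands in for any such function."""
    return math.log(background + addition) - math.log(background)

# The same act (adding 10 units of welfare) matters far less in a
# welfare-rich universe, so its moral weight turns on facts about aliens
# and ancient Egyptians that we cannot know:
print(marginal_value(100, 10))        # ~0.0953 in a sparse universe
print(marginal_value(1_000_000, 10))  # ~0.00001 in a welfare-rich one

# The proposed fix: apply the concave function to the counterfactual value
# *you* add, not to the world total. Background welfare then drops out:
def counterfactual_value(addition: float) -> float:
    return math.log(1 + addition)  # "1 +" just keeps log defined at zero

print(counterfactual_value(10))  # same verdict whatever the background is
```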
A2 - Population Ethics: Basically chucking out the axiological side of population ethics by denying that world-states have values, and seeing if you can still make something action-guiding, and not totally repugnant/incoherent, out of it. I think the best way to do this is to focus on reasons we have to care for future people grounded in our co-membership in certain collectives, or co-participation in certain collective actions. This model might produce something like a stepwise discounting of future generations, with scalar discount factors for co-members of each increasingly broad circle (see the sketch after this paragraph). But more likely it’ll look like different classes of reasons holding between you and different classes of future people (e.g. reasons of reciprocity to the future people who give our lives meaning, reasons to fulfil obligations to past generations by paying things forward, reasons to impartially benefit, reasons to partially benefit, etc.). My concern here isn’t to ‘avoid’ unpleasant impossibility theorems by making a much less action-guiding theory. Instead I want to understand how many of the practical upshots around caring for future generations you can derive without appealing to more controversial aggregative world-states-have-values reasoning. I’m guessing some of the issues here will involve working around non-identity problems and other difficult stuff, and I want the account to at least point at something moderately action-guiding. I’m not sure how I’ll deal with all these issues at this stage. Samuel Scheffler’s work seems directly relevant to this project.
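A minimal sketch of the stepwise-discounting version (circle boundaries and weights are invented for illustration; the classes-of-reasons version would replace the scalar weights with distinct reason types):

```python
# Each widening circle of co-membership gets a flat scalar weight, rather
# than a smooth per-generation exponential discount. All numbers below are
# placeholders for illustration only.
CIRCLES = [
    (1, 1.0),             # next generation: co-participants in shared projects
    (5, 0.8),             # co-members of ongoing practices and institutions
    (50, 0.5),            # co-members of broader traditions
    (float("inf"), 0.2),  # bare co-membership in humanity
]

def weight(generations_ahead: int) -> float:
    """Weight on the welfare of a generation, set by the narrowest circle
    that contains it."""
    for boundary, w in CIRCLES:
        if generations_ahead <= boundary:
            return w
    return 0.0  # unreachable: the last boundary is infinite

assert weight(1) == 1.0 and weight(3) == 0.8 and weight(1000) == 0.2
```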
B2 - Theoretical side of A2: There’s lots of analytic philosophy work on collective agents/acting together/social ontology/collective responsibility that’s obviously relevant. I’m sure I’d engage with that if I go with this topic, but at this stage I’m not super familiar with it apart from some of Korsgaard’s (and I guess Charles Taylor’s?) work on normative roles. At a glance, Margaret Gilbert’s and Michael Bratman’s work on collective agency might also be relevant. Outside analytic philosophy, I think it would be interesting to bring in some Confucian/Neo-Confucian ideas about the sociality of the self and how it grounds obligations across generations. Some African communitarian philosophy might be relevant here too. I think the points about membership in different groups grounding different kinds of obligation also line up with questions about the individuation of virtues and dealing with dilemmas/conflicts between different virtues, but I’m not sure how practically relevant this will be.
Balancing partial and impartial obligations may somehow depend on how much autonomy you can/should allow the other party?! See Scanlon, collective agency, autonomy objections to marriage?
Re creating happy people: I don’t think we have an obligation to; we just have an obligation to extend and enhance the welfare of collective agents of which we are a part. These include non-coercive communities like contract partnerships, mutually editing partnerships like marriage, and certain practices that let us realise our virtues (somehow?). These stretch into the long-term future in a stepwise fashion: we ought to make the practice continue to flourish, and to give it members and values such that it will continue to flourish. But these forms of software need a particular kind of substrate, and e.g. Boltzmann brains are not an appropriate one, so we don’t have an obligation to create more of them.
Obligations that are agent-relative vs. agent-neutral may split along whether you are the whole agent, part of a larger agent, or contain the relevant subagent. Consider the analogy of harming a person on one leg vs. the other vs. harming different people, and the converse of kicking another person with one leg vs. the other, for rights and duties respectively.
Consider which preference changes count as core vs. merely instrumental, for the analogy to how much autonomy you owe your subagents/co-parts of agents, and maybe what rights you have against the agents of which you are a part. General issue: how to aggregate welfare within an organism across time. Must all agents have complete preferences, in some sense?
Could the asymmetry between suffering and pleasure in population-ethics contexts carry over to seemingly non-creative contexts, if pain is somehow more identity-destroying/de-informing than pleasure is? Or maybe it’s just that extreme pleasure is identity-destroying, and since we only care about making people happy, increasing pleasure has diminishing returns even within one life. See the highlights in the archived Pocket article about suffering-focused ethics for all this.
There are also all the complicated questions about inter- and intrapersonal tradeoffs.