Multi-agent models of mind and their applications
The point at which moral saints, aspiration, preference change, multi-agent models of mind, AIs with multiple utility functions, etc. intersect is hidden here somewhere.
Value Change and AI Safety
Shapley Values + Game Theory
Choosing for Changing Selves
McMahan and Parfit on Summative vs. Global Desire Satisfaction Theories
The sociality of personhood, moral knowledge, and responsibility
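For reference on the Shapley Values + Game Theory item (this is just the standard cooperative-game formula, nothing specific to these notes): the Shapley value pays each player i their marginal contribution, averaged over every coalition S of the other players in N that i could join,

$$ \varphi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr). $$

Read against the multi-agent model of mind: if the "players" are subagents and v(S) measures how well a coalition of subagents serves the whole person, φ_i is one principled way to apportion credit (or blame) among them.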
How do moral saints relate to the meaning-of-life question? Is meaning the only non-ethical good in life? Why does meaning seem to require a subjective component while ethics doesn't as much?
Moral Offsetting
How does the report criterion for access consciousness handle cases like 'it hurts but I don't believe it'? Also, how biased is the criterion generally towards propositional thought?
Information sharing among subagents is needed for integrating your subagents into one self, which is necessary for local responsibility; so consciousness is needed for responsibility. Do integrated access and reportability ground the epistemic role of consciousness?
Virtue and salience tie into virtues for limited agents/bounded rationality: because of limited throughput we must find the right things salient and compress data into an apt conceptual scheme.
How are subagents individuated? If we think individuation is mostly social, that pushes us towards the Korsgaard/Hegel/Confucius view, on which subagents are individuated by scripts for behaviour in a certain role or context. This is supported by experiences like transference in therapy and regression to childhood patterns in your hometown. Alternatively, subagents might be individuated by neurologically distinct processing centres, e.g. for emotions, social relations, sense data, lust, etc. This is a more sophisticated version of the Plato/Aristotle view of the division of the soul. The distinction roughly matches an emphasis on nurture vs. nature respectively.
Could it be the case that there are no determinate answers about what we ought to do because of problems with vague objects? I.e., just as it is hard for terms to refer vaguely, obligations or reasons struggle to apply vaguely to agents, or to subagents, or to agents qua a given role. This might then give us limit cases based on the relation between the subagents/roles, but truth values may be gappy when there are moral dilemmas, or perhaps in other edge cases. This is relevant for combining utility functions in AI alignment, Susan Wolf's moral saints objection (from the Williams book), and moral dilemmas as an objection to virtue ethics.
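A minimal sketch of the "combining utility functions" point, in Python. The subagents, the two aggregation rules (a utilitarian sum and a maximin rule), and the example numbers are all assumptions for illustration, not anything argued for above; the only idea being illustrated is that when reasonable aggregation rules disagree, we can decline to return a verdict, mirroring a truth-value gap.

```python
from typing import Callable, Optional

# A utility function maps an option (named by a string) to a real-valued score.
Utility = Callable[[str], float]


def aggregate_verdict(option_a: str, option_b: str,
                      subagents: dict[str, Utility]) -> Optional[str]:
    """Compare two options under two aggregation rules.

    Returns an option only if every rule with a strict preference agrees
    and at least one rule has a strict preference; otherwise returns None,
    a crude stand-in for a truth-value gap.
    """
    def total(opt: str) -> float:
        # Utilitarian rule: sum the subagents' utilities.
        return sum(u(opt) for u in subagents.values())

    def worst_off(opt: str) -> float:
        # Maximin rule: look only at the least-satisfied subagent.
        return min(u(opt) for u in subagents.values())

    verdicts = set()
    for score in (total, worst_off):
        if score(option_a) > score(option_b):
            verdicts.add(option_a)
        elif score(option_b) > score(option_a):
            verdicts.add(option_b)

    return verdicts.pop() if len(verdicts) == 1 else None


# Hypothetical subagents and options, purely to exercise the function.
subagents = {
    "ambition": lambda opt: {"career": 0.9, "family": 0.4}[opt],
    "care":     lambda opt: {"career": 0.1, "family": 0.4}[opt],
}
print(aggregate_verdict("career", "family", subagents))  # None: the rules disagree
```

In the toy example the sum rule favours "career" while maximin favours "family", so the function returns None rather than forcing a determinate answer.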
https://www.convergenceanalysis.org/publications/investigating-the-role-of-agency-in-ai-x-risk