How related are permissible ontological and preference shifts?

The link between moral subjecthood and moral agency is the non-manipulatedness, or autonomy, of preferences: autonomy is what determines a preference's ownership, i.e. who is benefited by its being fulfilled. Who actually benefits from the satisfaction of a manipulated preference? Maybe no one, since the manipulator benefits only from the satisfaction of their own second-order preference for the fulfilment of the manipulated person's preferences (consider what happens when the manipulated person's preferences change for proof of this). But how can positively having autonomy, rather than merely lacking heteronomy, be the necessary criterion for moral subjecthood, i.e. for being able to have preferences whose fulfilment can be good or bad for you? If we want to include Boltzmann brains or simple animals as moral subjects, that standard may be too strong. Possibly we can say that being dispreferred is part of the functional definition of pain, or that such subjects still have an unmanipulated preference for not being in pain. Likely this all fails because unmanipulated ≠ autonomous: a preference can be unmanipulated and yet exist only due to random chance.

Informed preferences, consent, and the shaping of initial preferences by legitimate communities over which you have no control during your non-existence and infancy. An informative condition on informed preferences must take into account which pre-existing influences on a future person's preferences are legitimate, and so must consider the nature of moral communities.

Aspiration: Pettigrew's Choosing for Changing Selves on preference change (see also aspiration notes Google Doc?)

Sources of normativity should give answers about when to acquire new roles that are analogous to their answers about when to acquire new desires.

The core thread connecting preference revision, bounded agency, and democracy/anti-AI-expertise is the fact that many of our preferences are conditional or contentless. This problematises preference satisfaction as a criterion of welfare, since preferences are not fully determined (as functions onto world states) by the psychology of the agent; you also need to take into account the history and the future filling-out of the preferences. People have scripts and roles, not utility functions. The problem with infallible AI doctors and judges is that they cannot be negotiated with, punished, or entered into contracts with. (Relevant to categorical vs conditional desires in Non-paradigmatic agents.)
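
One way to put that contrast formally, purely as a sketch of my own (the symbols u_A, W, h, and f below are illustrative, not drawn from any of the sources cited here): the orthodox preference-satisfaction picture treats the agent's psychology as fixing a complete utility function over world states, whereas the claim above is that psychology fixes only a partial, conditional relation that history and future filling-out must complete.

```latex
% Minimal sketch (my own notation; u_A, W, h, f are illustrative, not from
% the sources cited above).
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}

% Orthodox picture: agent $A$'s psychology at a time fixes a complete
% utility function over world states, and $A$ is benefited to the extent
% that the actual world scores highly:
\[
  u_A : W \to \mathbb{R}.
\]

% Picture suggested above: $A$'s psychology fixes only a partial,
% conditional preference relation, indexed by a history $h$ and a future
% ``filling out'' $f$ that $A$ has not yet supplied:
\[
  \succeq_A^{\,h,f} \;\subseteq\; W \times W
  \qquad \text{(partial: silent on many pairs of worlds)},
\]
% so whether a preference counts as ``satisfied'' is not settled by $A$'s
% current psychological state alone.

\end{document}
```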