Pettigrew's Choosing for Changing Selves will hopefully be very relevant.
How do the kinds of loss of responsibility, or transformation, that result from AI relate to accounts of the badness of death? (See: Killing Animals Thesis; Killing Animals Revisions.)
A promising model is Edward Hinchman's work on the unity of interpersonal trust and diachronic intrapersonal trust.
Clues to the right approach to manipulation and taking responsibility arise in the shift from decision theory to game theory; see what the game-theoretic account of manipulation is. It might look like someone being able to trick you by inverting, or interceding between, your desires and your reasons in the world. Like that SMBC comic about driving cars: how you (ought to?) respond ceases to track reasons and tracks only the other person's decisions. There are multiple layers there, similar in some sense to the hierarchy of orders of desires. Dennett might have something on game theory/free will/manipulation.
For prudential reasons and agency questions, really think about the analogies with questions in the philosophy of free will: how manipulators or luck can sever the morally agentic 'responsible for' relations between agents and their actions/effects, and how different relations to future selves can be analogously responsibility-severing between agents and their future selves. You can frame this as whether the future selves are moral subjects for the current agent or not, i.e. whether the current agent has reasons to act to benefit them in particular or not.
First read the SEP articles on personal identity and ethics, advance directives and substitute decision-making, and personal autonomy, plus the various other SEP entries around practical reason, agency, responsibility, freedom, action, collective action, etc.
All of Lewis's dispositional theory of value is relevant, esp. footnote 7 (pp. 118-19) on desires de se and de dicto, and mistakenness about identity plus cross-world desires.
Check Willem deVries's book on Sellars, and Garfield's edited volume on Sellars and Buddhist philosophy.
Then read Marya Schechtman, Menkiti's 'Person and Community in African Traditional Thought', and Gyekye's response to Menkiti.
'Neurotechnologies, Relational Autonomy, and Authenticity' (Mary Jean Walker and Catriona Mackenzie) should be a good overview of the connections here with neuroenhancement and bioethics.
A major problem: how to reconcile 2D semantics/Yablo's aboutness theory, which is much more fine-grained, with Shoemaker's dispositional account of properties, which is only intensional, not hyperintensional. How do these two relate to inferentialism? Could we just say that the extension, but not the meaning, of a property is a disposition? This would let us differentiate, along 2D-semantics lines, the different intensions/hyperintensions of property terms.
Is Shoemaker's theory of properties even that important here? As I put it below, it only needs to be a necessary condition on a property (e.g. being a self) that it have a disposition as its extension. That doesn't make it a sufficient condition, so we could still allow two properties with the same extension (the same disposition), which would fix their intensions too, since a disposition is a modal function like an intension, while their hyperintensions differ.
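The same-intension/different-hyperintension combination can be put as a toy model, a minimal sketch in which all the property names, worlds, and extensions are my own illustrative assumptions, not anything from Shoemaker or Yablo: model an intension as a function from possible worlds to extensions, and a hyperintension as the structured description used to pick that function out.

```python
# Toy model (illustrative assumptions only): two property terms share an
# intension (a map from possible worlds to extensions) but differ in
# hyperintension (the structured description that picks the intension out).

WORLDS = ["w1", "w2", "w3"]

def intension_of(description):
    """Map a structured description to its intension: world -> extension."""
    # Stipulate that both descriptions are necessarily coextensive.
    table = {
        ("disposition", "to cause actions"): {"w1": {"a"}, "w2": {"a", "b"}, "w3": set()},
        ("well-ordered", "second-order will"): {"w1": {"a"}, "w2": {"a", "b"}, "w3": set()},
    }
    return table[description]

p1 = ("disposition", "to cause actions")
p2 = ("well-ordered", "second-order will")

# Same intension at every world...
same_intension = all(intension_of(p1)[w] == intension_of(p2)[w] for w in WORLDS)
# ...but different hyperintensions (different structured descriptions).
different_hyperintension = p1 != p2

print(same_intension, different_hyperintension)  # True True
```

On this picture, fixing the extension as a disposition fixes the intension (both are modal functions) without collapsing the hyperintensional difference, which is the wiggle room the note is after.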
Check Shoemaker's 'Functionalism and Qualia' to see how it links up. Check Burge's individualism papers (1979, 1986).
Check how Shoemaker responds to 2D semantics arguments maybe. How finely does Shoemaker individuate properties?
Does Yablo go even finer than hyperintensionality?
Have a look at the connections between finks, the ability to do otherwise (Frankfurt), two-level theories of compatibilism, and Shoemaker's theory of properties. Use this to reformulate Korsgaard's argument in Self-Constitution.
Could we somehow use all that to prove an autonomy criterion for selfhood/personhood, as Korsgaard does? The idea: unless a self is well ordered, it won't have the right kind of disposition (i.e. a non-finkish one) to cause actions, and since, on Shoemaker's account of properties, it's essential to a property like 'being a self' that it have such a disposition to cause actions, we are only selves when we are well ordered in this way.
How will this tie in to those earlier questions about particular vs. general causation and the nature of the self? If the self is something like a necessary imputation, is it more like a description of a state of affairs than a particular state of affairs? Check those notes on causation in the black notebook.
Think of the state-self analogy with responsibility. Just as a body of citizens as a whole can't be punished for the actions of a nation if the nation isn't democratic, i.e. the actions of the nation aren't expressive of the citizenry as a whole, or perhaps of its will, a person who is acting non-responsibly isn't expressing themselves in their actions, for the same reasons. What are the differences? The parts of persons can't be punished without punishing the whole person (you can't just punish the desiring faculty of a person, for example), but you can punish parts of the citizenry of a nation. Why is this? Just because a person is more unified/densely interconnected than a state? But that is a matter of degree, so we should be able to use Parfitian examples to move smoothly from one level of individual connectedness to another. What effects would that have on our judgements about group/individual responsibility?
- It seems harder to punish a corporation by forcing it to keep existing, whereas you can do that with a person, because a person's life can be net negative. Additionally, it seems like the fairest ways to punish collective agents involve dissolving them or limiting their powers. That is kind of analogous to killing people, but death is often good for a person. So can dissolving a corporation ever be good for the corporation?
- When can collectives be attributable for an action but not accountable for it? In cases of manipulation and luck, the same as for people? What is it for a collective to do something out of bad luck? What about the three kinds of moral luck?
Questions about collective action as necessary for human agency in some sense; see Austin's remark about public statuses, the literature on collective action, Brandom's remarks on Hegel and the constitution of Geist, Karen Jones and Michael Smith on the necessity of trustability for personhood, Neo-Confucians on role ethics, and Korsgaard's Normativity on the same topic.
Also check George Herbert Mead/Habermas on the sociality of the self
A potential problem for Korsgaard's defining the self as the cause of actions: aren't actions always going to be overdetermined? I.e., the cup still would have been pushed (on a broad description of the action) if only 95% of your self had caused it.
Remember Parfit's teletransporter case where, by accident, you aren't destroyed at the departure station, and people then try to kill the non-departed you. How do we preserve that non-transitive 'is the same person as' relation again? By mimicking the causation/responsibility relation, I guess.
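The non-transitivity can be made vivid with a toy fission model, a minimal sketch in which the names and the continuity links are my own illustrative assumptions: treat 'survives as' as psychological continuity via a forward causal chain, then symmetrise it into an intuitive 'same person as' relation and watch transitivity fail.

```python
# Toy fission model (illustrative assumptions only): "survives_as" holds when
# there is a direct psychological-continuity link, or a chain of such links
# running forward in time. In fission, A survives as both B1 and B2, but B1
# and B2 are not continuous with each other, so the symmetrised
# "same person as" relation fails to be transitive.

LINKS = {("A", "B1"), ("A", "B2")}  # A undergoes fission into B1 and B2

def survives_as(x, y):
    """Forward psychological continuity via causal chains of links."""
    if (x, y) in LINKS:
        return True
    return any(a == x and survives_as(b, y) for (a, b) in LINKS)

def same_person(x, y):
    """Symmetrised continuity: the intuitive 'same person as' relation."""
    return x == y or survives_as(x, y) or survives_as(y, x)

# The relation holds between A and each successor...
assert same_person("B1", "A") and same_person("A", "B2")
# ...but transitivity fails: B1 ~ A and A ~ B2, yet not B1 ~ B2.
print(same_person("B1", "B2"))  # False
```

The causal-chain clause in `survives_as` is what mimics the causation/responsibility relation: it runs only forward along links, so no chain ever connects the two fission products to each other.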
You could write a personal identity and genre fiction series: responsibility investigated via a murder mystery, love via a romance story, and something else via sci-fi, maybe fission and fusion stuff/non-transitivity, idk.
A difficult case is splits in romantic relationships: how do the marriage vows hold up with split persons, lmao? With both partners splitting? Interesting that G goes different ways on the fission and teletransporter cases.
How do time lags matter on the teletransporter? There would momentarily be two different people if destruction isn't instant, right? I guess you can get around it with norms about bringing people into existence and the like. Damn, gotta read Parfit.
Check the x-phi literature on differing intuitions about personal identity, esp. looking forward vs. looking back on teletransporters.
If you're a functionalist about consciousness and the function is a causal function, why would you believe in the TDT-style psychologism about personal identity on which an involuntarily resurrected copy of your algorithm is you?
From here down: Love in the Time of Teletransporters.
A crucial consideration is how manipulation that brings you into existence, vs. manipulation that modifies you while you already exist, changes which agents are responsible for the actions/outcomes. Consider not just uncontroversial examples of death and birth, but also teletransporters, fusion, fission, etc. Also try to map on moral luck and the ability to do otherwise, in a big multi-dimensional matrix of possible thought experiments, then weight them accordingly based on debunking arguments and virtue.
Oh dear, what if you die accidentally but then get resurrected, like Swampman. :( Isn't that just the same as Roko's-basilisk copies?
Continuity of responsibility for responsiveness qua control, and the necessity of continuity (a lack of funnels/widening gates) for personal responsibility and free will. (See Frankfurt's 'Freedom of the Will and the Concept of a Person'.)
- God cannot trust us because he is not vulnerable. If we can’t be vulnerable to something, maybe we can’t trust it either. Is this the epistemic sense of possible?
- The point is that people say we cannot trust AI because it isn't transparent to us. But I'm not sure this is the relevant part. I don't know if we could trust something that's totally transparent to us (and determinate), like a compulsively honest person. If it were totally understandable, predictable, had no akrasia, etc., would we really be vulnerable to it, and hence able to trust it? Vulnerable to what about it may be the question, e.g. do we have to be vulnerable to it as a whole somehow, or vulnerable to its volition? The lack of common understanding seems more relevant. (See: The Alignment Problem and AI safety notes; Commentated Trust Essay.)
How does this relate to the ethics of human enhancement, e.g. love drugs?
E.g. taking a drug that induces love for a person no matter what one's initial state effectively removes one's freedom to value them qua them: what you value, and their objective qualities, drop out of the simplest explanation; you only need to talk about the effects of the drug. (How does fiction relate to love drugs and simulated value?)
This means teletransporters make you survive, but involuntary reanimation may not, certainly not after suicide. Something to do with the dead being unable to update rationally while dead: responsibility breaks when they are dead through a major change in the landscape of objective reasons over time.
How does the funnel/gate stuff explain taking responsibility for another via manipulation?
How do roles fit into this, qua Korsgaard and Kongzi?
Social roles robustify local dispositions somehow, through scripts and self-identification. These somehow lower the very high bar for being responsible for actions and future person-stages?
How do consent, voluntariness, and foreknowledge fit into the responsibility-preservingness of transformations?
For example, how does consenting to unknowable transformations affect whether you're responsible? Is this a case of taking responsibility, come what may, for your transformative experience?
Does this solve for the degree of contingency in our actual real world transformations?
With respect to the funnel stuff and personal persistence, there could be an argument like this (the problem is that equivocation between continued existence and personal persistence is likely here; check exactly what kind of personal persistence must be valued under convergent instrumental values (CIV)):
1. You have to value your own continued existence as a way to achieve your goals, in some ways; see convergent instrumental values, for example.
2. Your own continued existence requires a lack of funnels and gates in transitions in responsiveness.
3. Therefore you need to value sensitive value revisions.