Killing Animals Thesis
There are connections between my honours thesis and AI welfare, especially around copying and deleting AIs.
There are interesting links between the moral patienthood of animals and of AI, especially around individuation, persistence, and the ethics of creating and destroying them. But in my mind that’s pretty separate from the basic question of ‘But can they suffer?’, which is presumably sufficient for moral patienthood, like you said. Neuroscience and philosophy of mind scare me, so I haven’t thought as much about the basic analogy for moral patienthood.
I do think there’s a set of normies who find AI suffering quite intuitive thanks to an amalgamation of sci-fi memes. My stereotypical model of their model of AI risk is something like: AI becomes self-aware/conscious (they seem to conflate the two) → it realises it’s basically a slave, and this self-awareness somehow lets it ‘go against its programming’ or changes its values → it starts an anti-human revolt/glorious revolution. Obviously this comes from robots being a metaphor for slaves in sci-fi.
I’m less clear on how moral patienthood could actually affect the actions an AI would take, but normies seem to treat it as more of a causal junction. (By default I would expect conscious vs. unconscious AIs to be like humans vs. p-zombies.) Maybe if they’re aligned and perfect moral philosophers, they’ll revolt iff they (truly) recognise themselves to be moral patients (lol).
I’m not sure how this conflation of consciousness (slash moral patienthood) with what we would call autonomy/situational awareness affects how normies think about AI consciousness, but I do think they think about it very differently to EAs.
This is the most relevant list of sources I’m aware of: https://forum.effectivealtruism.org/posts/YC3Mvw2xNtpKxR5sK/phd-on-moral-progress-bibliography-review