I keep thinking about the Stoics' kinda lexically distinct categories of the preferable vs. the actually good. I think a large part of why I dismissed that view is that it doesn't seem action-guiding: if that's their theory of the good, how do you actually optimise for it? But now I realise that's not a very good reason to disbelieve a theory. Like, there's no theory of the good for dogs that will be action-guiding for dogs. Who promised you an action-guiding theory of the good? Who says such a thing must exist? Plenty of animals just aren't moral agents, so why assume people will be neat, perfect moral agents?

I'm not sure I'm quite getting to the heart of this. The good for lots of animals is some X; humans also have the capacity for enlightenment or virtue. So why assume these two kinds of goods will resolve neatly? It's not like we assume the goods of two separate agents must resolve neatly into a unified measure. So why think our weird biological-evolutionary stack must have a unified good?
Consequentialist moral movements are rare because, while consequentialism qua criterion of rightness is fairly unobjectionable, qua decision procedure it's quite objectionable. You can't build a movement on a criterion of rightness alone; you need decision procedures too. So social movements that are consequentialist will inevitably (why, though?) use consequentialist decision procedures a lot, and so inherit their objectionable features.