The objective and participant stances · The hermeneutics of suspicion · Commentated Trust Essay/Trust

Anscombe’s work in Intention connecting ethics to the explanation of action (and Strawson’s work on responsibility) might be a good inroad into thinking about why we care about AI decisions being explainable (beyond instrumental reasons to care about safety). See: The Alignment Problem and AI safety notes.

Brandom’s Making It Explicit explains the appeal of web3/crypto/DAOs, etc.: they make social relations explicit (and concrete) in a way similar to how logical vocabulary makes inferential commitments explicit. That concerns their legibility. But their function is best explained by the removal of the de jure/de facto distinction: the de jure rule just is the de facto practice. This is how algorithmically governed systems (e.g. YouTube) work.
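A minimal sketch of that collapse, under my own illustrative assumptions (the function names, the flag threshold, and the YouTube-style scenario are all hypothetical): when the norm is executed mechanically, the stated rule and its enforcement are the same object, so there is no gap in which de facto practice could diverge from the de jure rule, whereas a human-applied norm leaves room for discretion and excuses.

```python
# Hypothetical illustration: in an algorithmically governed platform, the
# written rule and its enforcement coincide (de jure = de facto).

FLAG_THRESHOLD = 3  # hypothetical policy parameter

def demonetize_if_flagged(video_flags: int) -> bool:
    """The 'de jure' rule, which is simultaneously its own 'de facto' application."""
    return video_flags >= FLAG_THRESHOLD

# Contrast: a human-applied version of the same written rule. Its application
# is mediated by judgment and the acceptance of excuses, so rule and practice
# can come apart.
def human_moderator_decision(video_flags: int, excuse_accepted: bool) -> bool:
    rule_violated = video_flags >= FLAG_THRESHOLD
    return rule_violated and not excuse_accepted
```

The point of the contrast is only that the first function leaves no room for the practice of giving and accepting excuses that the surrounding notes are concerned with.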

I think the point is that there must be common knowledge of which excuses are acceptable in order to engage in the practice of action explanation and norm-revision, which is a lot of what we care about as people. This kind of mutual, non-coerced, autonomous norm-revision is related to the all-affected principle (AAP) and the all-subjected principle (ASP) in the democratic boundary problem, and to the relation of self-governance to autonomy and human welfare.
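For reference, the textbook epistemic-logic gloss on "common knowledge" (Lewis/Aumann), stated here only as a standard definition rather than anything from these notes: a proposition is common knowledge in a group when everyone knows it, everyone knows that everyone knows it, and so on, which is strictly stronger than mere shared knowledge.

```latex
% Standard definition of common knowledge (Lewis/Aumann), for reference.
% K_i p : agent i knows p;  E_G p : everyone in group G knows p.
\[
  E_G\,p \;:=\; \bigwedge_{i \in G} K_i\,p,
  \qquad
  C_G\,p \;:=\; \bigwedge_{n \ge 1} E_G^{\,n}\,p
  \;=\; E_G\,p \wedge E_G E_G\,p \wedge E_G E_G E_G\,p \wedge \cdots
\]
% Here p might be "excuse e counts as an acceptable excuse for failing to phi".
% Each agent merely knowing p (shared knowledge) is weaker than C_G p, and it is
% plausibly the latter that public practices of explanation and norm-revision need.
```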

(May not really be relevant here.) But what is going on with mathematical explanation and two-dimensional semantics? How does that relate to the goals of philosophy? How does it relate to conceptual clarity? See e.g. https://www.jstor.org/stable/20014420

https://www.oxford-aiethics.ox.ac.uk/blog/digital-rights-international-human-rights-emerging-right-human-decision-maker

Philosophy as forecasting conceptual convergence

The problem with AI systems’ decisions not being transparent is that they, and the norms governing them, can’t then be revised in a public or democratic way. This is connected to the underdetermination/open-endedness of norms.

  • Also: they lack shared understanding, or at least common understanding, of those norms?