Conceptual engineering and moral concepts

Headline: Conceptual engineering as mechanism design, or, the microeconomics of concepts.

  • (contract design is similarly relevant)

Check the LessWrong stuff about people being careless with concepts by only considering their epistemic role, not their use in social interaction. The basic point is a disagreement between a Wittgensteinian account of concepts (enhanced with economics) and an information-theoretic account of concepts.

Concepts can make interactions more efficient by removing all these sources of inefficiency, and removing them provides desiderata for conceptual engineering that go beyond simple compression/information content.

Consider the use of concepts like reliability and trust in making ‘markets’ between agents more efficient by reducing the incidence of these inefficiencies. Are agents always optimal here?

The very general point seems to be that shared concepts can operate like currencies, i.e. a medium of exchange, or a mutually legible price system. They make it clearer to agents what the others want and what goods they’re offering. Consider the roles of the price mechanism, and consider how concepts (embodied in public language) can serve the same role.

  • Price mechanism (see also price signals and price systems, these may be a bit closer)
    • Can operate via automatic bid/ask price matching, which has very low transaction costs (see the matching sketch after this list)
    • Auctions: multiple bidders make competing offers for a good. Different auction formats have different efficiency properties.
    • Insider trading is a problem, e.g. in DARPA’s assassination market for info aggregation.
    • The transaction costs of using the price mechanism are central to Coase’s theory of the firm (firms exist where those costs are too high)
  • Types of goods we could allocate
    • Rivalrous/non-rivalrous/anti-rivalrous (i.e. goods w/ network effects, like norms)
    • tangible/non-tangible
    • private/public
  • Market incentives typically but not always attract productive factors into activities according to comparative advantage.
  • We want to avoid rent seeking
  • Perfect competition: the idealised benchmark (many price-taking buyers and sellers, homogeneous goods, full information, free entry and exit); the distortions below are deviations from it.
  • Market distortions:
    • almost all types of taxes and subsidies, but especially excise or ad valorem taxes/subsidies
    • asymmetric information or uncertainty among market participants
    • any policy or action that restricts information critical to the market
    • monopoly, oligopoly, or monopsony powers of market participants
    • criminal coercion or subversion of legal contracts
    • illiquidity of the market (lack of buyers, sellers, product, or money)
    • collusion among market participants
    • mass non-rational behavior by market participants
    • price supports or subsidies
    • failure of government to provide a stable currency
    • failure of government to enforce the Rule of Law
    • failure of government to protect property rights
    • failure of government to regulate non-competitive market behavior
    • stifling or corrupt government regulation
    • nonconvex consumer preference sets
    • market externalities (the Austrian solutions framed by private property law are analogous to principles like the AAP and ASP for the boundary problem)
    • natural factors that impede competition between firms, as occurs in land markets
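
As a concrete illustration of the bid/ask matching point above, here is a minimal sketch of a call-auction style matcher; the function name and the reservation prices are made up for illustration, not taken from any source. The point is that the mechanism turns dispersed private valuations into trades without bilateral negotiation, which is the low-transaction-cost property that shared concepts are supposed to mimic.

```python
# Minimal illustrative sketch (hypothetical data): cross the highest bids
# with the lowest asks, clearing each matched pair at the midpoint price.

def match_orders(bids, asks):
    """Match bids against asks; each executed trade clears at the midpoint."""
    bids = sorted(bids, reverse=True)   # highest willingness-to-pay first
    asks = sorted(asks)                 # lowest willingness-to-accept first
    trades = []
    for bid, ask in zip(bids, asks):
        if bid < ask:                   # no further mutually beneficial trades
            break
        trades.append((bid + ask) / 2)  # midpoint clearing price for this pair
    return trades

if __name__ == "__main__":
    bids = [10, 8, 7, 3]   # buyers' reservation prices (made up)
    asks = [2, 6, 9, 11]   # sellers' reservation prices (made up)
    print(match_orders(bids, asks))     # -> [6.0, 7.0]
```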

For Trust

  • good = relevantly reliable/3-place trustworthy person
  • consumer = trustor, pays the cost of vulnerability/handing over discretionary power to the trustee
  • producer = trustee, demands discretionary power, and is compensated by positive reputational effects.
  • It’s an example of the principal-agent problem.
  • Basic question is when to rely and when to trust.
  • Problems we want to avoid and possible solutions:
    • Externalities
      • Negative externalities of production and consumption
      • Positive externalities of production and consumption being underpriced, and hence underproduced, i.e. trustworthiness being undersupplied
      • Positional/pecuniary externalities leading to expenditure cascades, i.e. having to make yourself more vulnerable/give up more discretionary power? This could only arise if multiple people are bidding for one person’s trustworthiness, which in turn could only happen if trustworthiness is rivalrous or excludable or something. 3-place trustworthiness might be excludable, but rich trustworthiness isn’t.
      • Solutions
        • Stronger property rights let producers capture positive externalities and prevent consumers from free-riding on positive externalities/creating negative externalities.
        • Lowering transaction costs can make mutually beneficial trade more attractive
        • Producer can stop positive externalities of information leaking by being secretive.
          • Funny that the ‘information wants to be free’ idea points to the fact that information will therefore be undersupplied, since it’s a positive externality/public good. Does knowledge want to be free?
        • Some regulator, i.e. the community, preventing or punishing the production of goods with negative externalities, i.e. false trustworthiness.
        • Pigouvian taxes (taxes equal in value to the negative externality)
    • Principal-agent problems
      • How to avoid the moral hazard of the risk-taker (the trustee) taking risks with the trustor’s planning needs? How can we avoid these incentives without raising oversight costs? (Moral hazard is caused by hidden actions on the part of the trustee.)
      • How to avoid adverse selection? (Adverse selection is caused by hidden information on the part of the seller-trustee.)
      • Multiple principals problem:
        • Principals can bribe the trustee to follow their interest. One principal can free-ride on the others’ duties re governance of the trustee. Principals can accidentally double up on monitoring inefficiently.
          • Looks like we always want one and only one principal to govern the trustee, and we don’t want the trustee to serve two masters. This could be helped by norms to altruistically punish untrustworthy people even if you’re not trusting them. Why can’t we altruistically punish unreliable people again?
          • Monitoring costs aren’t just the direct costs of monitoring: monitoring also reduces the variance of outcome quality, or, put another way, it stops the trustee from innovating, which, depending on the upside and downside risks, can be bad.
      • Solutions:
        • commissions, profit sharing, the threat of termination (amplified by reputational damage), etc.
        • The more responsive the trustee is to compensation, the stronger and more effective the incentives for performance (like commissions) should be. Idk if we can make them more responsive somehow? Stronger incentives for performance = the trustee internalises more of the risk (see the incentive-intensity sketch after this list).
    • Contract theory solutions
      • In many incomplete employment contracts, the mere threat of termination is sufficient to ensure compliance.
      • Property rights (rights of discretionary use) matter in cases of incomplete contracts, this is (part of) why firms exist.
      • Relying on someone is like contracting on the open market; trusting them is like forming a little firm with them. This is because trust involves discretionary power, which is like property rights?
      • Ownership of assets should be allocated to whichever party’s investment is more important, since the owner gets the discretionary power. This suggests trust is justified in cases where the trustee’s investment in the relationship is more important? Weird.
    • The assignment problem of matching 3-place-trustworthy people to trustors (https://en.wikipedia.org/wiki/Assignment_problem), and the many other assignment/matching problems in combinatorics mentioned at the end of the Wikipedia article. See the Algorithms to Live By notes, and the assignment sketch after this list.
    • Whatever problems credit (i.e. trusted third parties) solves
    • Uncertainty about the trustworthiness of the trustee = asymmetric information. So the trustee should send some costly signals to the trustor, like the trustor does with vulnerability. According to Spence 1973, these signals must be easier for trustworthy people to send than for untrustworthy people (see the separating condition after this list). What could this signal be?
      • This reduction in information asymmetry helps avoid adverse selection, wherein very trustworthy people leave the market because no one is willing to pay them enough discretion/honour? Idk, it doesn’t really seem like trustees are trying to maximise their profits; they seem more altruistic than that. Well, maybe not in all cases. I guess people are often trustworthy in minor ways to non-intimates due to external regulation, e.g. they’re trustworthy wrt not assaulting you. These external regulations work like lemon laws.
      • Apparently screening works better than signalling when the less-informed party acts first (i.e. the trustor acts first).
    • Do we care more about being betrayed than disappointed because betrayal is effectively the severing of a collaborative relationship? Ties into the badness of losing a relationship?
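
On the incentive-intensity point in the principal-agent bullets above: one standard way to formalise it (my gloss, not in the original note) is the Holmström-Milgrom linear-contract result. The symbols below are the model’s, not the note’s.

```latex
% Holmström-Milgrom linear contract (standard model, my gloss):
% wage w = \alpha + \beta x, output x = e + \varepsilon with \varepsilon \sim N(0, \sigma^2),
% CARA risk aversion r, effort cost (c/2) e^2. Optimal incentive intensity:
\beta^* = \frac{1}{1 + r c \sigma^2}
```

So stronger incentives (a higher commission) are warranted when the trustee is less risk averse, more responsive to incentives (lower effort-cost curvature), and when performance is measured with less noise.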
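
For the assignment problem mentioned above, here is a minimal sketch using scipy’s Hungarian-algorithm solver; the cost matrix and its interpretation (how poorly trustee j fits trustor i’s domain of vulnerability) are made up for illustration.

```python
# Minimal sketch of the trustor-to-trustee assignment problem.
# The cost matrix is hypothetical: cost[i][j] = mismatch between
# trustor i's needs and trustee j's competences.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([
    [4, 1, 3],   # trustor 0's mismatch with trustees 0, 1, 2
    [2, 0, 5],   # trustor 1
    [3, 2, 2],   # trustor 2
])

rows, cols = linear_sum_assignment(cost)  # minimises total mismatch
print(list(zip(rows, cols)))              # optimal trustor -> trustee pairing
print(cost[rows, cols].sum())             # total cost of that pairing
```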
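
And for the Spence-style signalling bullet: the separating condition can be written as below, where the notation is mine (the note doesn’t fix any): s is the signal level, c_T and c_U are the per-unit signalling costs for trustworthy and untrustworthy types, and Delta is the extra discretion/reward granted to someone read as trustworthy.

```latex
% Separating (single-crossing) condition, my notation:
c_T \, s \;\le\; \Delta \;<\; c_U \, s
```

The signal separates types exactly when it is cheap enough for trustworthy people to send but too costly for untrustworthy people to mimic, which is the Spence 1973 requirement flagged above.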

Check also Ruth Millikan on unicepts to get a more accurate model of concepts’ relations to language. See the paper ‘Toward Fin de Siècle Ethics’ for an assessment of Moore’s open question argument and how limit concepts fit in as a solution to the problems of the semantics of value.

I wonder if the economic literature on self-recommending and self-stable voting rules/club formation, etc. would provide good models for antifoundationalist philosophical methodology like conceptual engineering and Quine’s web of belief.

Just because there is no joint for a concept to carve at doesn’t mean we shouldn’t carve. Consider splitting a block of wood: just because there’s no privileged right place to split it doesn’t mean it shouldn’t be split.

The case of someone being lied to but not acting differently than they would have otherwise is like a gear that has come loose but continues to turn in place due to its contact with the other gears: the arrangement is fragile, and dependent on the truth-trackingness of the other gears.

Criticise survey-based x-phi that targets limit concepts like ethics: because they’re limit concepts, achieving the same understanding of them is difficult for some reason? Why would having the same understanding of limit concepts be hard? How are limit concepts to be characterised in terms of eigenvalues/Bayes nets/CRS/inferentialism/Millikan’s work on unicepts? Look into the neuroscience of concepts, especially ethical concepts (and maybe basic concepts in theoretical and practical rationality?). See LessWrong and the Oxford Handbook of Moral Psychology for sources here.

Re inferential role semantics: how does this relate to distributional semantics?
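
For contrast with inferential role semantics, here is a toy sketch of the distributional side (co-occurrence vectors plus cosine similarity); the corpus is invented and the whole thing is only meant to make the question above concrete.

```python
# Toy distributional semantics: represent each word by its co-occurrence
# counts with other words in the same sentence, then compare words by
# cosine similarity of those count vectors. Corpus is invented.
from collections import Counter
from itertools import combinations
import math

corpus = [
    "the reliable friend kept the promise",
    "the trustworthy friend kept the secret",
    "the broken clock kept bad time",
]

cooc = {}
for sentence in corpus:
    for w1, w2 in combinations(set(sentence.split()), 2):
        cooc.setdefault(w1, Counter())[w2] += 1
        cooc.setdefault(w2, Counter())[w1] += 1

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in set(u) | set(v))
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

print(cosine(cooc["reliable"], cooc["trustworthy"]))  # relatively high (~0.75)
print(cosine(cooc["reliable"], cooc["broken"]))       # lower (~0.45)
```

On an inferential-role story, by contrast, a concept is individuated by which inferences it licenses rather than by which words it co-occurs with; the open question above is how far the two come apart in practice.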

There’s also the point that moral philosophers are better rationalisers, so you should expect them to actually do worse on moral outcomes than plebs, but they perform the same. So there is some countervailing positive treatment effect (ok, or maybe a selection effect) that keeps them at baseline instead of backsliding. If we could isolate this we’d be golden.

The difficulty of reaching the research frontier in any one area of ethics/philosophy, plus the necessity of rigidity for fruitful conceptual engineering, plus ethics being the practical limit concept (in some sense), so that all other concepts must be taken into account when conceptually engineering it, together mean that solving ethics is computationally intractable. This justifies the ‘science will never solve ethics’ belief and explains the reaction to any engineered reduction Q* of an ethical concept Q: you can always ask ‘but why THAT precisification of Q?’. This is probably an alternative to Kierkegaard’s ‘ethics bottoms out in arbitrary choice’ explanation of that reaction.

How do limit concepts differ from foundations? In their fixedness or knowability or definiteness or something. How is this contra Nagarjuna or Quine on ethics?

How does this stuff relate to knowledge and epistemology? Dunno, but check this out.

How do idealisation processes fit into the Weil/Korsgaard framing of meta-ethics? Do they have to max out both Korsgaardian structure and Weilian attention? Or are proper structure and attention both defined by idealisation processes?