https://www.lesswrong.com/posts/EByDsY9S3EDhhfFzC/some-thoughts-on-metaphilosophy This gives a range of alternate framings to push against.

Moral progress and philosophy
13 ways of looking at philosophy

Notes on Superforecasting: hedgehog experts do worse than chance, and worse than hedgehog non-experts. Relevant to ethicists. This supports one-big-idea system builders being worse. Lewis is OK,
If pragmatism means philosophy is forecasting the limit of rational consensus, then since simple algorithms (no change, extrapolate the existing trend) do really well at forecasting, philosophers who follow them should be very accurate, e.g. Singer. This also supports the idea that small-idea historical philosophy (Taylor, but not MacIntyre or Foucault) is quite valuable. How is impact distributed between these different algorithms, though? E.g. Brier scores vs deviations from the market.
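To make the comparison concrete, here is a minimal sketch of scoring forecasters by Brier score and by deviation from a "market" consensus. All the probabilities and outcomes are made-up illustrations, not real data; "hedgehog" and "fox" are the Tetlock archetypes.

```python
# Hedged sketch: compare forecasters by Brier score vs deviation from a
# "market" consensus. All numbers below are illustrative, not real data.

def brier(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

outcomes = [1, 0, 1, 1, 0]              # resolved yes/no questions
market   = [0.7, 0.4, 0.6, 0.8, 0.3]    # consensus probabilities
hedgehog = [0.95, 0.9, 0.95, 0.9, 0.9]  # one big idea: near-certain "yes" everywhere
fox      = [0.8, 0.3, 0.7, 0.85, 0.2]   # aggregates many considerations

for name, probs in [("market", market), ("hedgehog", hedgehog), ("fox", fox)]:
    dev = sum(abs(p - m) for p, m in zip(probs, market)) / len(probs)
    print(f"{name}: Brier={brier(probs, outcomes):.3f}, mean |dev from market|={dev:.3f}")
```

Note the two metrics answer different questions: Brier score measures raw accuracy, while deviation from the market measures whether someone is adding information beyond the consensus at all.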
Moral agnostics' work on applied ethics should be better.
What are the limits of this model? How do ontological shifts between us and fully informed agents, and the other things that differentiate us from ideal rational agents (full information, unlimited computation, logical omniscience, etc.), break this model?
How to beat the moral market?
Who is aggregating the most in philosophy? People who do interdisciplinary work/work in many areas?
How do correct belief contractions fit in?
Are philosophers who say their philosophical work doesn't much impact their lives less trustworthy?

The Nash equilibrium vs the actual results of the Financial Times two-thirds-of-the-average poll is super relevant.
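A quick sketch of the gap, assuming the standard "guess 2/3 of the average" setup (numbers 0–100): level-0 players guess the midpoint 50, and each higher level best-responds with 2/3 of the level below. The Nash equilibrium is 0, while real polls tend to land around levels 1–2, well away from equilibrium.

```python
# Hedged sketch of level-k reasoning in the "guess 2/3 of the average" game.
# Level-0 guesses the midpoint 50; level k guesses 2/3 of level k-1.
# The Nash equilibrium is 0; actual poll results sit far above it.

def level_k_guess(k, level0=50.0, factor=2/3):
    """Guess produced by a level-k reasoner."""
    return level0 * factor ** k

guesses = [level_k_guess(k) for k in range(6)]
# 50.0, 33.33, 22.22, 14.81, 9.88, 6.58 -> converging toward the Nash play of 0
```

The gap between the equilibrium and where real respondents cluster is the analogue of the gap between idealised rational agents and actual philosophers.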

Assess whether utilitarians were trying to beat the market to a greater extent than others (e.g. church philosophers), then compute how well they did so given the whole sample space.

How do paper-reply chains affect the accuracy of citations as a measure of beating the philosophical market?

Proposal: show superforecasters the philosophers' surveys plus historical analogues, explain the limitations of the survey design (e.g. selection effects and unclear questions), then get them to forecast.

Similar stuff applies to whether generalism causes philosophical aptitude, e.g. Descartes.

Aww, Korsgaard will fail :(

How much regression to the mean is there, e.g. among once-progressive Romantic poets? (Also, the mean of what: their age cohort? general-populace opinion? philosophical demographics?) This gives a rough measure of the treatment effect.
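A minimal simulation of the baseline effect, assuming observed "progressiveness" is a stable trait plus noise: selecting the most extreme group on one observation guarantees their re-observed scores drift back toward the mean even with no treatment at all, which is the baseline any claimed treatment effect must beat. All parameters are illustrative.

```python
# Hedged sketch: regression to the mean under selection, with
# observed score = stable trait + independent noise. Illustrative only.
import random

random.seed(0)
n = 10_000
trait = [random.gauss(0, 1) for _ in range(n)]
obs1 = [t + random.gauss(0, 1) for t in trait]  # first observation
obs2 = [t + random.gauss(0, 1) for t in trait]  # independent re-observation

# select the most extreme decile on the first observation
cutoff = sorted(obs1)[-n // 10]
top = [i for i in range(n) if obs1[i] >= cutoff]

m1 = sum(obs1[i] for i in top) / len(top)
m2 = sum(obs2[i] for i in top) / len(top)
# m2 < m1: the selected group regresses toward the mean with no treatment.
```

With equal trait and noise variance, roughly half of the selected group's apparent extremity is noise, so about half of it vanishes on re-observation.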

How could philosophical schools have a positive treatment effect, the way superforecaster teams improve performance?

How could ambiguity of interpretation, or finding new interpretations of classic texts, make the marginal returns to rereading diminish more slowly?

If the returns on study are compounding, shouldn't we expect the age at which people started learning something relevant to their chosen field to be a major predictor of success in that field?

What if the returns are compounding but there are also diminishing marginal direct returns? Would this support a positive treatment effect for polymaths/interdisciplinary work? How might the tails come together or apart for philosophy/other-discipline pairs?

Check the recent grant on expert surveys for tips to improve PhilPapers; also look at how surveys compare to prediction markets.

GPI/FHI working papers on this. Also metaphilosophy, moral psychology anthologies.

What is the price in the marketplace of ideas? How liquid is it qua prediction market? Should we expect superforecasters, teams, prediction markets, etc. to do best? The predictors of team success are lack of groupthink, flat hierarchies, willingness to question, precise questions, etc., so teams should do well here. However, politically or religiously selected teams won't do well: even if there's no hierarchy within the team, they're beholden to a mental hierarchy.

Less public identification with identity-relevant issues avoids undercorrection.

What are philosophical trends or base rates?

Search LW and the Forum for "marketplace of ideas", "predicting philosophy", etc.

How to stop philosophers getting increasingly overconfident, the way cops do about detecting lies?

Can contrast this stuff with the interesting defences of super-charitable historical interpretation given in Minds Almost Meeting. Should charitable interpretation preserve truth or virtue?

The efficient market of ideas

Whether questions can be broken down is a shared component of research rigidity, the HCH-imitation method in AI safety, and the usefulness of big history: you need to trust the distilled aggregates of other people's work, or something.

  • The "AI safety researchers don't stack" LW post

Lol: https://arxiv.org/abs/2406.20087