Probably worth going through this and similar notes and just asking if {list of people with productive AI ideas - who is this?} have said anything that tests/extends/contradicts/formalizes these ideas

Intelligence Curse notes

Maybe ask to discuss economics of AI and human empowerment (and what governance and comms and technical interventions it advantages) in the weekly meeting. I think that would be fun and useful. I basically just want to use them as sensemaking meetings.

hmhmhmhm. So we started talking about the relationship between productivity gains from AI and employment levels under AI. How do they relate? What would you need to model to explain each of the 4 low-high pair combos? It seems like it’s transaction costs between humans and AIs vs. between AIs and AIs, plus whether AIs are willing to pay more than ~2kcal (a human’s daily upkeep) for a human-day’s worth of output of any of the goods in the AI consumption basket. So it’s not just comparative advantage; you also need to ask about absolute human productivity for the goods in the AI consumption/investment basket. Sometimes the horse isn’t worth the hay. You can look from the supply side and from the demand side.
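A toy version of the “horse isn’t worth the hay” condition, a minimal sketch with made-up numbers (all parameter values are illustrative assumptions, not estimates):

```python
# Toy model: when does an AI economy still employ a human?
# All numbers below are illustrative assumptions.

def human_employed(value_of_human_day_output: float,
                   human_upkeep: float,
                   transaction_cost: float) -> bool:
    """An AI hires a human for a day iff the value (to the AI) of that
    day's output exceeds the human's upkeep plus the human-AI
    transaction cost. 'Sometimes the horse isn't worth the hay.'"""
    return value_of_human_day_output > human_upkeep + transaction_cost

# Comparative advantage alone isn't enough: even if a human is
# *relatively* best at some task, employment fails when the absolute
# value of a human-day of it is below upkeep + frictions.
print(human_employed(5.0, 2.0, 1.0))   # worth the hay
print(human_employed(2.5, 2.0, 1.0))   # not worth the hay
```

This is why both the supply side (upkeep, transaction costs) and the demand side (what a human-day is worth in the AI consumption basket) matter.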

  • But humans on their own are so unproductive compared to human firms (that’s why firms exist), so you probably need to think about the productivity of human-only vs. AI-human vs. AI-only firms, and their transaction costs with AIs. More generally, The Secret of Our Success stuff; unfortunately AIs probably beat us on all that, RIP.
  • Caring about what humans demand from humans in the long run is probably not sustainable; most demand is AI demand. But what’s wrong with a niche market? I guess you just want humans to maintain a bunch of capital as a proxy for power. Lol maybe you need to care about human-AI trade deficits and tariffs. They’re a country of geniuses in a datacenter after all lmaoooo. You don’t want a trade deficit of power-fungible goods with AIs under conditions of competition.
  • see Baumol’s Cost Disease, AI & Economic Growth (https://dominiccoey.github.io/essays/baumol/), which is very clear. Anyway, carrying on: how do you think about the levels of competition (and hence selection and incentives for power-seeking) you want between AIs and in the overall economy? (“Will AIs power-seek?” from a systems perspective is useful, of course.) What are the successful forecasts of this, and what are the solutions to this kind of thing? There are principal-agent problems here at the micro level, and at the macro level I guess it’s more like class or faction struggle than war with external powers, because of how embedded AIs will be in our lives and economy. So there’s the Federalist Papers (see the upcoming riff on them too). There’s separation of powers with Montesquieu. Other realists? Grotius? 17th century wins again gang gang.
  • meta question: who has made good long-range forecasts about the balance of power? Mandeville? Montesquieu? What about in China, India? Ask Claude ofc.
  • More currently there’s the federalist papers riff, I think foresight institute thing? The various AI scenario forecasts/stories. Bostrom’s deep utopia.
  • I DON’T WANT A SCREEN ACROSS THE FUTURE i WANT A STORY RREEEEEEE

Education might end up looking like finishing schools, since we will all be like upper-class ladies (or Athenian citizens?) re our job prospects / levels of wealth. (S’s idea)

Distributed training algorithms seem riskier than distributed inference algorithms, maybe? Unless we get more online learning, I guess. But yeah, distributed inference is plausibly important for robust universal basic provision of compute. I’m kinda suspicious that data centre owners would graciously let the citizenry control/lock off the compute in their facilities. I guess you could have govt-run data centres, but still, it’s not much better.

Idk where the notes went, but too-harsh chip restrictions would increase China’s motive to invade Taiwan, right? Even then they could theoretically still be net stabilising insofar as ASI e.g. disrupts second strike and breaks MAD.

If we consider AI as a high-level programming language, what can we say about its economic impacts, especially on programmers, and its usefulness more generally? Think about compensation in competitive markets: small efficiency gains from using closer-to-the-metal languages are worthwhile and well compensated. But if you just want to do stuff with programming, it’s usually better to wait for the usability and speed of the tech to improve, rather than learning machine language or whatever. The same is likely true of AI.

I think the social aspect of media consumption will keep blogs, podcasts, etc. competitive over AI alternatives even if the AI is aesthetically better. Some evidence for this is the prevalence of ‘authentic’ parasocial content over aesthetically superior content on social media. In fact, we might even see a kind of new AI aestheticism (AIstheticism), where a certain class of people pride themselves on liking art for art’s sake, contra moralists, historicists, and romantics. Aesthetics and disinterested liking.

But anyway this suggests that AI won’t significantly replace traditional human-authored media until new generations grow up naturalised to it. I don’t think this desire for the human-parasocial needs to be innate, but it does seem hard to reverse post-adolescence.

Consider AI vs DAOs and blockchain as the alternate operating systems of the future. Blockchain is legalist, AI is Confucian. Blockchain is precise, transparent, and contract based, AI is imprecise, opaque, and judgement based. What will be a better interface between parts of the world? What does the Confucian/legalist debate imply about this?

Re: simulators in LLMs, how much is this just like the Freudian/Schopenhauerian model of the unconscious/the will vs the ego? What other antecedents and analogies of this idea are there? How does ELK as a research paradigm mirror Freudian psychoanalysis? What about Jung instead?

This is the same pattern as Weird old philosophy of mind

What will remain scarce once intelligence is too cheap to meter, and what does this imply about which things to invest in (in a broad sense) now?

  • Status
  • Land
  • Positional goods more generally (though you need to keep track of which positional goods AIs and humans will compete over)
  • Re status above: if status comes from having human followers, which seems true, being a patron/feudal lord may persist.
  • Context
  • Taste
  • Various knowledges/skills?
    • Math
    • Writing
    • Art
  • Various relationships
  • Having children
  • Meditation/headspace
  • Memetic immunity
  • Mental models
  • Personal immortality (literally or through writing, at least temporarily)
  • Share of memespace for some group (assuming human attention isn’t radically augmented)
  • see Traits That May Cease to Be Valuable - Quarter Mile https://quarter—mile.com/Traits-That-May-Cease-to-Be-Valuable
  • a sense of what’s possible and what will soon be possible

I was thinking of the above from the pov of a worker, but what about the pov of a firm, or the collection of firms as a whole? What remains scarce and what does not? (See Dwarkesh’s post on AI firms.)

  • company culture?
  • high trust relationships like in Japan?
  • the information you get through Hayekian market mechanisms? Or slow-feedback world interactions
  • you can just do stuff attitude? More like risk appetite

You could reverse the above stuff by thinking of all the possible sources of alpha or beta (or wages), and just going through that list and thinking about each item.

The above points tell you about bottlenecks for timelines and sources of profit.

What about effects like Baumol’s cost disease or imperfect substitution that generate bottlenecks? How do they interact with this stuff? Basically, try to integrate the advice on how to have a job, how AI firms will work, and Max Tabarrok’s post about AI and wages.

Most demand will come from AIs, not people (what’s the empirical measure of this?), so you need to think about what they will want. Think of all the organs of the body, and think about acting as that organ for an AI: e.g. eyes, ears, hands, livers/kidneys, immune system, a guarantor who can suffer or be harmed. This model suggests that imperfect substitution (AIs having faculty X 100x better but faculty Y only 2x better) will lead to certain imbalances or bottlenecks that humans have to help them with, e.g. testing the 1000 new drug ideas in a lab. Of course the organ need not be human; small cheap drones could be quite useful too. Decent investment, maybe. The humans of human challenge trials would be in high demand, or substitutes for them (Including different level regimes). I wonder if you could make a service to connect participants and trialers there? What’s the legal landscape, I wonder. How would liability work? Could you do some innovative compensation mechanism (equity?) for the participants?
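The bottleneck point above can be made concrete with Amdahl-style arithmetic, a sketch assuming a pipeline of sequential stages (the stage shares and speedup multipliers are made-up numbers):

```python
# Toy bottleneck arithmetic for imperfect substitution.
# Assumption: work is a pipeline of sequential stages, and AI speeds
# up each stage by a different factor. Overall speedup is then capped
# by the least-accelerated stage.

def overall_speedup(stage_shares, stage_speedups):
    """stage_shares: fractions of baseline time per stage (sum to 1).
    stage_speedups: per-stage AI multipliers."""
    new_time = sum(share / k for share, k in zip(stage_shares, stage_speedups))
    return 1.0 / new_time

# Ideation sped up 100x, lab testing (needs hands/bodies) only 2x:
print(overall_speedup([0.5, 0.5], [100, 2]))  # ~3.92, nowhere near 51
```

The slow stage dominates, which is exactly where demand for human organs-for-hire (or cheap drones) should concentrate.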

The alternative to an organ is memory, i.e. data. Is Apple in the lead with their unified ecosystem? Plausibly, idk. Think through Apple given fast takeoff/AI R&D.

What are the travel-agent-equivalent jobs, the typist-equivalent jobs, etc.? Just gen a list and think through.

See the various startup ideas implicit in the legal ai section of the longer version of SIS. Agent infrastructure

Like the French Revolution, the next few years will give the chance of meteoric rises. Don’t get stuck in apprenticeships. Some institutions will rise with the tide, others will break, others will maintain a facade, others will dissolve, etc. You must choose which to join.

Markets of AIs and social evolution of AI seem to be a hot thing now. See Cowen and Dwarkesh mentioning it recently. Hyperbolic pushing for that too. The multi-agent risks from advanced AI paper is the SOTA, more or less.

Enforcing human-in-the-loop is useless because of gradual disempowerment type concerns/selection effects. So you need to go up a level and build the legal/technical infrastructure that makes verification, trust, etc. easier for both humans and AIs. So agent IDs, reputation schemes for trust, and zero-trust mechanisms like escrow and liability (tied to trusted people/institutions), etc.

Historical case studies of the elasticity of labor substitution have all used small multiples. This works for marginal analysis. AI probably won’t be a small multiple. So what models do you use? See Max Tabarrok’s post on horses and AI. Carrier pigeons vs. telegraphs is the other analogy.
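One standard non-marginal model to play with here: a nested CES production function with a fixed factor. A minimal sketch, assuming (my parameterisation, not from any of the posts) fixed capital K combining with a CES aggregate of human labor H and AI labor A; whether flooding the market with AI labor crashes or raises the human wage then depends on the substitutability parameter r:

```python
# Toy nested-CES model (all functional forms and parameters assumed):
#   Y = K**alpha * (H**r + A**r)**((1 - alpha) / r)
# The elasticity of substitution between H and A is 1/(1 - r).
# The competitive human wage is the marginal product of H, computed
# numerically below.

def ces_output(H, A, K=1.0, alpha=0.5, r=0.8):
    return K**alpha * (H**r + A**r) ** ((1 - alpha) / r)

def human_wage(H, A, K=1.0, alpha=0.5, r=0.8, eps=1e-7):
    # forward-difference approximation of dY/dH
    return (ces_output(H + eps, A, K, alpha, r)
            - ces_output(H, A, K, alpha, r)) / eps

# High substitutability (r = 0.8): a flood of AI labor crashes the wage.
print(human_wage(1.0, 1.0, r=0.8) > human_wage(1.0, 1e6, r=0.8))
# Low substitutability (r = 0.2): AI labor complements and raises it.
print(human_wage(1.0, 1e6, r=0.2) > human_wage(1.0, 1.0, r=0.2))
```

The non-marginal point is that with AI labor at 10^6x rather than 1.1x, which regime you’re in (substitute vs. complement relative to the fixed factor) dominates everything the small-multiple case studies measured.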

Random thoughts about multiagent risks and policies to address them: I think enforcing human-in-the-loop might be counterproductive and disempowering. If economic competition/selection pressures for fast decision-making and action are intense, then orgs/nations/institutions that enforce human-in-the-loop might just get sidelined/pushed into a parallel fake economy/power structure. Some of the EU and UN AI governance work feels like it’s already getting outpaced in this way.

Instead it might be better to work on technical/governance infrastructure like agent IDs and robust reputation systems (for making it cheaper and quicker to trust AI agents), or liability, escrow, etc. (for making trust less necessary for depending on AI agents). Ideally these would buy you more human control for less human-in-the-loop.

Re the impact of AI on labour markets:

Re Z’s comments about what model you adopt when marginal models break down: AGI-pilled people seem to like the horse/internal-combustion-engine or carrier-pigeon/telegraph models. I thought this post was an interesting response to the more extreme pessimistic models. The fact that people jump to thinking about nonhuman animals to predict the impacts of AI does seem to support D’s idea that you need a more first-principles model of what even counts as an economic agent in the relevant sense.

Also, assuming AI agents will be massively productive imperfect substitutes for human labour, shouldn’t you expect most demand for human labour to come from AI firms/agents (at least eventually)? On this model you might think more general questions about human empowerment (e.g. the bargaining power/pricing power of AI agents/firms vs. humans) would be more important than questions about how to make UBI work, since high aggregate demand for human labour would persist for longer than if AIs were more perfect substitutes for human labour.

Bostrom has a model of income sources under AGI in Deep Utopia.

Dwarkesh’s question about where the novel insights are, given the memorisation, was very generative. What generator could produce more questions like that? What are things AI is much better at than humans? What’s an output of humans doing that thing? Do AIs produce that output? If not, why not? This should hopefully produce more narrative violations.