DO NOT SHARE {REDACTED} ALPHA!!!
This needs to be broken out at this point, then I need to figure out what to do with the backlinks; I guess add new notes to the AI MOC, then change the links to point at the notes in there.
Probably worth going through this and similar notes and just asking if {list of people with productive AI ideas - who is this?} have said anything that tests/extends/contradicts/formalizes these ideas
Intelligence Curse notes; Value Change and AI Safety; The Alignment Problem and AI safety notes
Why is everything not already paperclips? Fermi paradox and FOOM and anthropic reasoning
- Specifically, there should be a tension between certain solutions to the Fermi paradox and the claim that AI FOOM is inevitable.
make timelines note eugh
Timelines getting longer - why?
- GPT-4.5 and maybe xAI showing pretraining scaling failed
- Dwarkesh-ish reasons about the lack of original insights, plus CUA (computer-use agents)/continual learning being hard
- But isn't this disjunctive with RSI via math/coding agents?
Sycophancy isn't about being nice to you: sycophants are not nice people.
Wife Noticer: "Experts on body dysmorphic disorder have warned that people struggling with it have become increasingly dependent on AI chatbots to evaluate their self-perceived flaws and recommend cosmetic surgeries. 'It's almost coming up in every single session,' one therapist tells me."
AI as Governance | Annual Reviews https://www.annualreviews.org/content/journals/10.1146/annurev-polisci-040723-013245
Why does Google keep the search algorithm opaque, to slow down or stay ahead of SEO-optimiser adversaries in that arms race? And how does that relate to deterrence etc.?
In what ways does one party in a competition or environment using AI incentivise or require the other parties to use AI? One example: AI can output lots of writing very cheaply, and if that output needs to be read or checked, you'll need an AI to do that for you. What is this general question called in economic terms? (Toy payoff matrix after the bullet below.) This is a response to Matt Clifford's claim that most organisations don't have 'AI-shaped holes in them'.
- The offence/defence balance in the attention economy (e.g. YouTube vs. Unhook). uBlock works via a community-curated list.
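A hedged toy model (payoff numbers entirely made up) of the question above: one party adopting AI changes the other's best response, because manually processing AI-scale output is expensive while producing/filtering with AI is cheap.

```python
# Toy 2x2 adoption game (all payoffs are illustrative assumptions).
# Each party chooses whether to adopt AI; payoffs are (row, column).

payoffs = {
    ("no_ai", "no_ai"): (3, 3),  # human-scale output, human-scale reading
    ("no_ai", "ai"):    (0, 4),  # drowning in the other side's cheap output
    ("ai",    "no_ai"): (4, 0),
    ("ai",    "ai"):    (2, 2),  # arms race: both pay for tooling
}

def best_response(opponent_move: str) -> str:
    """Row player's payoff-maximising move given the opponent's move."""
    return max(("no_ai", "ai"), key=lambda m: payoffs[(m, opponent_move)][0])

for opp in ("no_ai", "ai"):
    print(f"If the other party plays {opp}, best response is {best_response(opp)}")
```

With these numbers adopting AI is dominant even though (no_ai, no_ai) is jointly better, i.e. the structure is a prisoner's dilemma, which is one candidate answer to the "what is this called" question.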
Remember: opaque memory features are an attempt at model lock-in.
How will LLM prediction markets be treated and taken up differently from actual human prediction markets, which are usually ignored?
How fast agents will self-improve depends on a few things. The obvious one is the error rate/goal drift over long time horizons. Insofar as this looks like reproduction with a rate of improvement ~1, with short time cycles it can quickly diverge to either RSI or to nothing. You can already see this in Dynomight's article "Something weird is happening with LLMs and chess" (see the follow-up to his article, though). How does this intersect with the fact that completing a task often looks like a weakest-link chain, so you jump from ~0% to ~100% completion with only a narrow improvement in capabilities? I guess this is devinterp, ay. How do you build evals that track this well? AI safety conference. I guess you could do evals for breaking down a task, or break down the task yourself but run evals on each subtask.
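A minimal toy sketch (my own illustrative numbers, nothing from Dynomight) of both dynamics: compounding at a rate just above vs. just below 1, and the weakest-link p^n jump from ~0% to ~100% task completion.

```python
# Toy illustration of the two dynamics above (all numbers are made up).

def self_improvement(r: float, cycles: int, c0: float = 1.0) -> float:
    """Capability after `cycles` rounds of compounding at rate r."""
    c = c0
    for _ in range(cycles):
        c *= r
    return c

def chain_success(p_step: float, n_steps: int) -> float:
    """P(completing an n-step task) when every step must succeed."""
    return p_step ** n_steps

if __name__ == "__main__":
    # Rates just above vs. just below 1 diverge fast over short cycles:
    for r in (0.98, 1.02):
        print(f"r={r}: capability after 200 cycles = {self_improvement(r, 200):.3f}")
    # A narrow gain in per-step reliability flips a 100-step task
    # from hopeless to routine:
    for p in (0.95, 0.99, 0.999):
        print(f"p={p}: P(100-step task) = {chain_success(p, 100):.3f}")
```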
Criteria of rightness/decision procedures = selection criteria in model training vs inference-time thought.
How similar is the mindful view of thoughts to the simulator model of LLMs? E.g. thoughts are just occurrences in mindspace, not really imperatives/reasons/whatever. Also that QC tweet that you should think of their words like the words of characters in a fictional game or story. Connection to assertion? What happens to the common(s) ground with such interlocutors? See also that guy's 3-level model of LLM psychology.
There's a lot of prior research on the importance of legal systems and institutions for human and economic flourishing. How will this transfer to AI, e.g. what kinds of legal regimes and institutions (what's the difference?) enable or hinder AI? There will probably be SEZs for AI. Dunno if you would want to advocate that.
The moves made by those automated sooner (e.g. programmers) can probably be made earlier in other fields to get an advantage: watch the competition, pick the best fruits, and use those to beat people in your field before you're automated too. Touches on the different strategies of staying employed vs. making a fantastic amount of money (AI Misc). How does the fact that things are moving so fast change the right strategies here?
- Pair programming for writing? I’ve seen startup people do it with their users kinda. Blog Post Editing + Writing Advice
The interestingness waterline: as LLMs become more and more interesting, internet writing (especially non-timely internet writing) needs to become more and more interesting in order to compete. How does this link to the low vs high autonomy / low vs high skill worker impacts of AI? AI and employment
How different are Chinese models' pretraining corpora? Would asking people to save lives by funding fire stations (more first-world) or cancer research make it pattern-match to saving first-worlders?
Yes, being jobless thanks to AI won't lead to a crisis of meaning: see every aristocratic class ever. But being powerless might; most aristocratic classes still had politics to occupy them.
Sharing prompts for LLMs, like the popular Claude one, makes you something like a translator monk of a vast library of scriptures: however people explore it, they explore it through your translation.
You predict the future by telling a story, but the defining feature of a story is that there is no fact of the matter about many things, whereas there is a fact of the matter about real life. That's why living in what was once the future will always be strange.
Walter Scott, in his preface to The Betrothed, talks about a steam-powered novel-writing engine.
Ah, governments will struggle for revenue; could they become legalist, extracting fines for revenue? Not if they're easy to rebel against, but maybe if they're not. Probably need to stop pirating movies then.
There's that old internet rule that if you want a question answered, post a wrong answer. There's something similar here: if you want feedback, say something was written by AI and that you think it's good.
Maybe ask to discuss the economics of AI and human empowerment (and what governance, comms, and technical interventions it advantages) in the weekly meeting. I think that would be fun and useful. I basically just want to use them as sensemaking meetings.
Re Gwern's idea that influencing LLMs is what matters now: I would be incredibly sad if all that survived of me was my public-facing writing, not how I am with people I'm close to.
hmhmhmhm. So we started talking about the relationship between productivity gains from AI and employment levels. How do they relate? What do you need to model to explain each of the four low/high pair combos? It seems like it's transaction costs between humans and AIs vs. AIs and AIs, plus whether AIs are willing to pay more or less than ~2k kcal for a human-day's worth of any of the goods in the AI consumption basket. So it's not just comparative advantage; you also need to ask about absolute human productivity for the goods in the AI consumption/investment basket. Sometimes the horse isn't worth the hay (toy sketch after this list). You can look from the supply side and from the demand side.
- But humans on their own are so unproductive compared to human firms (that's why there are firms), so you probably need to think about human-only vs. AI-human vs. AI-only firms' productivity and their transaction costs with AIs. More generally, Secrets of Our Success stuff; unfortunately AIs probably beat us on all that, RIP.
- Caring about what humans demand from humans in the long run is probably not sustainable; most demand is AI demand. But what's wrong with a niche market? I guess you just want humans to maintain a bunch of capital as a proxy for power. Lol, maybe you need to care about human-AI trade deficits and tariffs. They're a country of geniuses in a datacenter after all, lmaoooo. You don't want a trade deficit in power-fungible goods with AIs under conditions of competition.
- See Baumol's Cost Disease, AI & Economic Growth https://dominiccoey.github.io/essays/baumol/ which is very clear. Anyway, carrying on: how do you think about the levels of competition (and hence selection and incentives for power-seeking) you want between AIs and in the overall economy? ("Will AIs power-seek?" from a systems perspective is useful, of course.) What are the successful forecasts of this, and what are the solutions to this kind of thing? There are principal-agent problems here at the micro level, and at the macro level I guess it's more like class or faction struggle than war with external powers, because of how embedded AIs will be in our lives and economy. So there's the Federalist Papers (see the upcoming riff on them too). There's separation of powers with Montesquieu. Other realists? Grotius? 17th century wins again, gang gang.
- Meta question: who has made good long-range forecasts about the balance of power? Mandeville? Montesquieu? What about in China, India? Ask Claude, of course.
- More currently there's the Federalist Papers riff, I think a Foresight Institute thing? The various AI scenario forecasts/stories. Bostrom's Deep Utopia.
- I DON’T WANT A SCREEN ACROSS THE FUTURE i WANT A STORY RREEEEEEE
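A toy sketch (my own numbers) of the "horse isn't worth the hay" point from the top of this list: comparative advantage doesn't guarantee employment if the most an AI buyer will pay for a human-day of output is below the cost of keeping the human supplied.

```python
# Subsistence-floor toy model: a human-day of work gets bought only if
# buyers value it above the cost of supplying it (all numbers assumed).

SUBSISTENCE_COST = 30.0  # $/day to keep a human worker fed, housed, etc.

def human_employed(ai_willingness_to_pay: float) -> bool:
    """True if a human-day of output is worth more than it costs to supply."""
    return ai_willingness_to_pay > SUBSISTENCE_COST

for wtp in (500.0, 50.0, 5.0):
    print(f"AI values a human-day at ${wtp:.0f}: employed = {human_employed(wtp)}")
```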
Distributed training algorithms seem riskier than distributed inference algorithms, maybe? Unless we get more online learning, I guess. But yeah, distributed inference is plausibly important for robust universal basic provision of compute. I'm kinda suspicious that data-centre owners would graciously let the citizenry control/lock off the compute in their facilities. I guess you could have government-run data centres, but still, it's not much better.
Idk where the notes went, but too-harsh chip restrictions would increase China's motive to invade Taiwan, right? Even then, they could theoretically still be net stabilising insofar as ASI e.g. disrupts second strike and breaks MAD.
If we consider AI as a high-level programming language, what can we say about its economic impacts, especially on programmers, and its usefulness more generally? Think about compensation in competitive markets: small efficiency gains from using closer-to-the-metal languages are worthwhile and well compensated. But if you just want to do stuff with programming, it's usually better to wait for the usability and speed of the tech to improve rather than learning machine language or whatever. The same is likely true of AI.
I think the social aspect of media consumption will keep blogs, podcasts, etc. competitive against AI alternatives even if the AI is aesthetically better. Some evidence for this is the prevalence of 'authentic' parasocial content over aesthetically superior content on social media. In fact, we might even see a kind of new AI aestheticism (AIstheticism), where a certain class of people pride themselves on liking art for art's sake, contra moralists, historicists, and romantics. Aesthetics and disinterested liking
But anyway this suggests that AI won’t significantly replace traditional human-authored media until new generations grow up naturalised to it. I don’t think this desire for the human-parasocial needs to be innate, but it does seem hard to reverse post-adolescence.
Consider AI vs DAOs and blockchain as the alternate operating systems of the future. Blockchain is legalist, AI is Confucian. Blockchain is precise, transparent, and contract based, AI is imprecise, opaque, and judgement based. What will be a better interface between parts of the world? What does the Confucian/legalist debate imply about this?
Re: simulators in LLMs, how much is this just like the Freudian/Schopenhauerian model of the unconscious/the will vs the ego? What other antecedents and analogies of this idea are there? How does ELK as a research paradigm mirror Freudian psychoanalysis? What about Jung instead?
This is the same pattern as Weird old philosophy (of mind)
I was thinking of the above from the POV of a worker, but what about the POV of a firm, or the collection of firms as a whole? What remains scarce and what does not? (See Dwarkesh's post on AI firms.)
- company culture?
- high trust relationships like in Japan?
- the information you get through Hayekian market mechanisms? Or slow-feedback world interactions
- you can just do stuff attitude? More like risk appetite
You could reverse the above by thinking of all the possible sources of alpha or beta (or wages), then just go through that list and think about each item.
The above points tell you about bottlenecks for timelines and sources of profit.
What about effects like Baumol's cost disease or imperfect substitution that generate bottlenecks? How do they interact with this stuff? Basically, try to integrate the advice on how to have a job, how AI firms will work, and Max Tabarrok's post about AI and wages.
Most demand will come from AIs, not people (what's the empirical measure of this?), so you need to think about what they will want. Think of all the organs of the body, and think about acting as that organ for an AI: eyes, ears, hands, livers/kidneys, immune system, a guarantor who can suffer or be harmed. This model suggests that imperfect substitution (AIs having faculty X 100x better but faculty Y only 2x better) will lead to certain imbalances or bottlenecks that humans have to help with, e.g. testing the 1,000 new drug ideas in a lab (toy sketch below). Of course the organ need not be human; small cheap drones could be quite useful too. Decent investment, maybe. The humans of human challenge trials would be in high demand, or substitutes for them (including under different regulatory regimes). I wonder if you could make a service to connect participants and trialers there? What's the legal landscape, I wonder? How would liability work? Could you do some innovative compensation mechanism (equity?) for the participants?
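A minimal sketch (assumed numbers) of the imperfect-substitution arithmetic above: if a pipeline needs both faculty X and faculty Y, a 100x speedup on X with only 2x on Y leaves Y as the bottleneck, which is exactly where the human organs (or drones, or trial participants) get demanded.

```python
# Serial-pipeline bottleneck toy: throughput is set by the slowest stage.

def pipeline_throughput(x_rate: float, y_rate: float) -> float:
    """Each unit must pass both stages, so the slower stage binds."""
    return min(x_rate, y_rate)

baseline = pipeline_throughput(x_rate=1.0, y_rate=1.0)
post_ai = pipeline_throughput(x_rate=100.0, y_rate=2.0)  # X: 100x, Y: 2x
print(f"Whole-pipeline speedup: {post_ai / baseline:.0f}x (not 100x)")
```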
The alternative to an organ is memory, i.e. data. Is Apple in the lead with their unified ecosystem? Plausibly, idk. Think through Apple given fast takeoff/AI R&D. I guess there's also the idea that always-on wearables will solve labs' data shortages. Idk how plausible that is.
What are the travel-agent-equivalent jobs, the typist-equivalent jobs, etc.? Just generate a list and think it through.
See the various startup ideas implicit in the legal ai section of the longer version of SIS. Agent infrastructure
Like the French Revolution, the next few years will offer the chance of meteoric rises. Don't get stuck in apprenticeships. Some institutions will rise with the tide, others will break, others will maintain a facade, others will dissolve, etc. You must choose which to join.
Markets of AIs and social evolution of AIs seem to be a hot thing now; see Cowen and Dwarkesh mentioning it recently. Hyperbolic is pushing for that too. The "Multi-Agent Risks from Advanced AI" paper is the SOTA, more or less.
Enforcing human-in-the-loop is useless because of gradual-disempowerment-type concerns/selection effects. So you need to go up a level and build the legal/technical infrastructure that makes verification, trust, etc. easier for both humans and AIs: agent IDs, reputation schemes for trust, and zero-trust mechanisms like escrow and liability (tied to trusted people/institutions, etc.).
Random thoughts about multi-agent risks and policies to address them: I think enforcing human-in-the-loop might be counterproductive and disempowering. If economic competition/selection pressures for fast decision-making and action are intense, then orgs/nations/institutions that enforce human-in-the-loop might just get sidelined/pushed into a parallel fake economy/power structure. Some of the EU and UN AI governance work feels like it's already getting outpaced in this way.
Instead it might be better to work on technical/governance infrastructure like agent IDs and robust reputation systems (for making it cheaper and quicker to trust AI agents), or liability, escrow, etc. (for making trust less necessary for depending on AI agents). Ideally these would buy you more human control for less human-in-the-loop. (Hypothetical sketch below.)
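A hypothetical sketch (none of this is an existing standard or API; names and fields are my own invention) of the kind of infrastructure meant here: an agent ID that ties actions to a liable principal, a reputation record that makes trust cheaper, and an escrow record that makes trust less necessary.

```python
# Illustrative data structures only, to make the three mechanisms concrete.

from dataclasses import dataclass

@dataclass
class AgentID:
    agent_pubkey: str  # stable identifier for the agent across transactions
    principal: str     # the person/institution that carries liability

@dataclass
class Reputation:
    completed: int = 0
    disputed: int = 0

    def score(self) -> float:
        """Fraction of transactions completed without dispute."""
        total = self.completed + self.disputed
        return self.completed / total if total else 0.0

@dataclass
class Escrow:
    payer: AgentID
    payee: AgentID
    amount: float
    released: bool = False

    def release(self, verified: bool) -> None:
        # Funds move only once the work is verified, so neither side
        # has to trust the other up front.
        self.released = verified
```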
Dwarkesh's question about where the novel insights are, given all the memorisation, was very generative. What generator could produce more questions like that? What are things AI is much better at than humans; what's an output of humans doing that thing; do AIs produce that output; if not, why not? This should hopefully produce more narrative violations.
Alex Tabarrok: AIs will themselves be part of the economy. Firms and individuals use AIs to make decisions. Thus, any AI has to take into account the decisions of other AIs. But no AI is going to be so far advanced beyond other AIs that this will be possible. In other words, as AIs increase in power, so does the complexity of the economy.
The problem of perfectly organizing an economy does not become easier with greater computing power precisely because greater computing power also makes the economy more complex.
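A toy formalisation (functional forms are my own assumptions) of Tabarrok's point: if compute also creates AI agents, and coordinating n agents costs ~n^2, the planner's budget shrinks relative to the problem as compute grows.

```python
# Planner-vs-economy toy: more compute means more AI agents to coordinate.

def plannable_fraction(compute: float, agents_per_compute: float = 1.0) -> float:
    n = agents_per_compute * compute  # more compute -> more AI agents
    cost_to_plan = n ** 2             # pairwise interactions to coordinate
    return compute / cost_to_plan     # share of the problem the planner can afford

for b in (10, 1_000, 100_000):
    print(f"compute={b}: plannable fraction ~ {plannable_fraction(b):.6f}")
```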