On the predictive processing model, “multi-level predictor” (see Andy Clark) is the term that links to the multiple-self stuff.
Nguyen’s model explains why switching teams in board games that don’t have quite the right number of players takes the enjoyment out of the game: it makes the goal structure more complex, which basically defeats the purpose of games.
I’m sure the finite vs. infinite games distinction (Carse) is relevant to some stuff here, but idk what yet. Also, how is the concept different from just instrumentally vs. intrinsically valuable things? Is it relevant to conditional vs. unconditional desires?
Partial preferences are one part of a boxed sub-agent; what else is it made of?
In playing games we adopt (bracketed) intrinsic values with breakout conditions, e.g. if there’s a fire we stop playing poker. What lets humans run these fun little sub-agents with distinctive temporary intrinsic goals? Could AI do this as well? (See the toy sketch below.)
This also ties in with the permissibility of dishonesty and cowardice in games, plus Roger Caillois’ and Nguyen’s work on games.
This suggests we already fork ourselves, but in a boxed way. What are the consequences of this in human lives? Why aren’t the consequences totally terrible, like in Age of Em scenarios? AI Identity
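A minimal sketch of what such a boxed sub-agent might look like in code, under assumptions I’m inventing purely for illustration (the `BoxedSubAgent` class, its `breakout` predicate, and the action strings are hypothetical, not anyone’s published design):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class BoxedSubAgent:
    """A temporary agent with its own bracketed intrinsic goal.

    Inside the box the goal is treated as intrinsic; but the box carries
    a breakout condition supplied by the larger agent, and when it fires
    the goal's reason-giving force is cancelled, not merely outweighed.
    """
    goal: str                         # e.g. "win this poker hand"
    breakout: Callable[[dict], bool]  # e.g. "is the house on fire?"

    def choose_action(self, world: dict) -> str:
        if self.breakout(world):
            return "EXIT_GAME"  # dissolve the box entirely
        return f"pursue: {self.goal}"

# The larger agent summons a disposable sub-agent for one game.
poker_self = BoxedSubAgent(goal="win the hand",
                           breakout=lambda w: w.get("fire", False))
print(poker_self.choose_action({"fire": False}))  # pursue: win the hand
print(poker_self.choose_action({"fire": True}))   # EXIT_GAME
```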
If unhappy families are all unhappy in their own ways but happy families are all alike, does the same apply to bad vs. good players of games? It seems like there are far fewer axes of badness available to people in games where the rules and objectives are clearly set. Is this merely an artefact of relative complexity? Or could there be something relevant to IRL/CIRL/gavagai underdetermination problems and embedded-agents stuff? (A toy illustration of the underdetermination point follows.)
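To make the IRL connection concrete: the worry is that observed play underdetermines the values behind it. A minimal sketch, assuming made-up actions and reward tables (this is not a real IRL algorithm, just the shape of the problem):

```python
# Several distinct reward functions rationalise the exact same observed
# behaviour, so the "true" values of a good player are underdetermined
# by their play alone.

actions = ["pass", "shoot"]
observed_policy = "shoot"  # all we ever see the good player do

candidate_rewards = {
    "R1": {"pass": 0.0, "shoot": 1.0},   # mildly prefers shooting
    "R2": {"pass": -3.0, "shoot": 5.0},  # strongly prefers shooting
    "R3": {"pass": 1.0, "shoot": 1.0},   # indifferent: anything is optimal
}

for name, reward in candidate_rewards.items():
    best_value = max(reward[a] for a in actions)
    consistent = reward[observed_policy] >= best_value
    print(f"{name}: consistent with the observed play? {consistent}")
# All three print True: behaviour alone can't pick out the reward.
```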
How do gladiatorial games fit into Nguyen’s voluntary-difficulty analysis of games?
See ACANs The sociality of personhood, moral knowledge, and responsibility
How do Arendt’s (and Heidegger’s?) views about the incompatibility of instrumental reason and human freedom/Arendtian action fit with Nguyen’s view that games (with their instrumental reasoning) work to enhance human freedom?
Games: Agency as Art notes: games are our library of agencies, and they communicate agencies.
Suits’s account
- The aim of a game is only conditionally valuable, conditional on achieving it within the voluntary constraints. In striving play we choose the end for the sake of the means it forces on us.
Stupid games, where the fun part is failing but we have to genuinely try to succeed in order to have that fun (e.g. Twister), demonstrate striving play.
- Art appreciation may be another kind of striving game. Aesthetics and disinterested liking?
Striving play shows we can take on disposable ends. To do so we must suspend our overarching goals (Topics in Contemporary Epistemology). We do this to take on new forms of agency, like the agency of deceit in Diplomacy. This lets us become more free by teaching us new ways to inhabit our own agency. It’s like yoga in that prescribed forms are learnt, and they then enable greater freedom. Like yoga, playing games can encourage a certain light-footedness of agency.
Multiplayer games especially are closer to legal structures and urban planning than to narratives. They shape affordances, rules, and incentives.
Games let us see the aesthetic of beautiful action that rarely emerges in everyday life. This is because the game world and game agents can be engineered to be beautifully harmonious. The beauty of games is in the processes they prompt, not in the games as objects.
Clarity of values in games can lead to value capture of complex ends in the real world. Sacredness. (Seems to be basically just Goodharting?)
Games and the Art of Agency (2019 article)
There are Suitsian games of voluntary obstacles and rules, and there is make-believe play, which Suits doesn’t capture.
Note that outside of striving games we don’t actually try to maximise achievement of the lusory/pre-lusory goal, e.g. when maximising it would unbalance an otherwise even competition.
How can playing Suitsian games be disinterested in the Kantian sense? Aesthetics and disinterested liking
- Because the player is interested with respect to the lusory goal, but the larger agency is disinterested as to whether this sub-agency achieves its ends; it only wants it to compete, to exist, to strive. It wants the sub-player to be, not an end in itself, but something that has ends. Inversely, the sub-player doesn’t care about the experience of playing or striving; it may only care about winning. But for the larger self to experience the striving, it must summon a player that doesn’t care about the experience and only strives. In this way the subjecthood of the larger player exceeds its agency: its agency doesn’t reach within the bounds of the game, but its subjectivity does. Honesty and Poker
The example of stupid games is important: it shows we don’t simply blot out most of our existing goals, leaving us aware only of the few that remain, but actually gain new ones, since the goals of stupid games aren’t ones anyone has outside those games.
Contra Millgram’s picture of agential unity as a singular achievement, characterised by the ability to weigh different ends against each other in chains of practical reasoning and so make them commensurable, the fluidity of game-playing agency is also an achievement, and the ability to bracket off agents and their ends from commensurability is likewise an achievement. Moral Offsetting
Important to note that bracketed agency screens off awareness of the justification of actions (the higher ends that justify our playing the game), but not the actual justification of ends.
- An example would be loving for the sake of your own pleasure: this would actually be justification-effacing, since you wouldn’t actually be in love. The justification loving → pleasure can’t appear anywhere in the chain of justification without practical contradiction, so screening off awareness doesn’t help. In this way the pleasures of love are justificationally self-effacing (Michael Stocker 1976). Love in the Time of Teletransporters
- This distinction between awareness-of-justification-effacing and justification-effacing ends matters for the limits on the goals a sub-agent can have while still playing the game. You can’t play the game of love for the end of pleasure, because a ‘love’ that’s sensitive to considerations of pleasure isn’t actually love, since love is partly defined as responsiveness to the needs of another. This modal-responsiveness condition is the limit criterion I was looking for before in Topics in Contemporary Epistemology and Philosophy of Language and Mind. So what I need to define is a kind of game, either a language game or an inquiry game, where part of the aim or the rules is to become/be modally sensitive to some state of affairs, and becoming/being sensitive to this SoA then restricts what other goals the agent can have while still counting as engaging in the game. For the agent to still have its main goals, presuming the main goals can conflict with the game goals, there must be a possible situation where the main goal is so severely violated by the game goal that the game goal is dropped. For example, a house fire stops you playing the game, or even caring about the sub-goal: it actually cancels the reasons for achieving the sub-goal, it doesn’t just outweigh them (see the sketch below).
- This criterion is forward-looking in the space of reasons; might there be a backward-looking one as well?
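A toy formalisation of that cancel-vs-outweigh distinction; the functions and weights below are invented purely for illustration:

```python
# Outweighing compares live reasons; cancelling removes a reason from
# deliberation entirely, so no weight could ever revive it.

def outweigh(reasons: dict) -> str:
    """Weigh all live reasons against each other; strongest wins."""
    return max(reasons, key=reasons.get)

def cancel(reasons: dict, defeated: str) -> dict:
    """Strike a reason from deliberation: it no longer counts at all."""
    return {k: v for k, v in reasons.items() if k != defeated}

reasons = {"win the hand": 7.0, "escape the fire": 9.0}

# Outweighing: the game goal still exerts pull; inflate its weight
# enough and it wins again -- the wrong result for the fire case.
print(outweigh(reasons))                            # escape the fire
print(outweigh({**reasons, "win the hand": 20.0}))  # win the hand (!)

# Cancelling: the fire dissolves the game frame, so the game goal
# gives no reason at all, whatever its previous weight.
print(outweigh(cancel(reasons, "win the hand")))    # escape the fire
```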
Skepticism about disposable ends. (Basic response: game-ends are directly action-guiding in a way fictional, make-believe, and occurrent instrumental ends are not. They’re just action-guiding in a cancellable, or not-fully-commensurable, way, unlike regular ends.)
- Are you just pretending to have certain ends?
- No, because failing to achieve your game goal is an actual failure for you, unlike in cases of pretend love, where failing to secure the beloved’s wellbeing isn’t a failure for you, since it isn’t actually your goal.
- Are they make-believe ends, like make-believe fears? These are make-believe because you don’t actually produce the responses appropriate to the supposed attitude, like fearing a werewolf in a movie (e.g. you don’t actually run away or call the cops). Instead you imagine a fictional world inhabited by your fictional self, where the fictional self is actually in danger and actually fears. Walton explains that the quasi-fear is a prop that lets you generate fictional truths (the presence of bears) from real-world truths (like the presence of stumps) according to a generation rule. In this way the fictional truths aren’t simply voluntary, even though the rule of generation is (see the toy rendering after this list). {This is exactly the answer to the basic puzzle about competitive cooperation in Philosophy of Language and Mind See Nguyen’s 2017 paper on the topic for a more developed picture of this.} With make-believe ends, players’ actions are actually regulated by their game goals (e.g. feeling thrilled), not their fictional goals (e.g. avoiding bears). Their genuine ends will even make them act in ways that lead them to imagine their make-believe ends are being frustrated.
- Games may involve these, but they also involve real but bracketed goals, like winning the game (which the fictional self wouldn’t have, since it doesn’t know it’s in a game). There are also plenty of games that don’t seem to involve make-believe, like basketball. And there are some games where you’re meant to make your character comically fail to meet their ends (your make-believe ends).
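A toy rendering of Walton’s generation-rule idea from the bullet above. “Stumps count as bears” is his example; the code structure and names are my own illustration, not Walton’s formalism:

```python
from typing import Optional

# Real-world props: what's actually out there in the forest.
real_world = {"stump at the clearing", "stump by the creek", "oak tree"}

def generation_rule(prop: str) -> Optional[str]:
    """'Stumps count as bears': maps real props to fictional truths."""
    return prop.replace("stump", "bear") if "stump" in prop else None

# The rule of generation is adopted voluntarily, but once adopted the
# fictional truths are fixed by rule + world, not by the players' whims.
fictional_truths = {f for f in map(generation_rule, real_world) if f}
print(fictional_truths)  # {'bear at the clearing', 'bear by the creek'}
```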
Millgram argues desires involve two kinds of inferential commitments: forward-directed ones, to the inferences I’m committed to making from a desire, and backward-directed ones, about the origins and reasonableness of the psychological states from which my desires proceed. This means instrumentally supported desires, acquired because having the desire is good (contra constitutive reasons to desire based on the actual desirability of the thing), can’t justify your forward-directed commitments to e.g. pay extra for moon roofs, and hence aren’t actual desires. The Alignment Problem and AI safety notes Holton’s Intention as a Model for Belief
- This forgets that some instrumentally supported desires are desirable merely to have, while others are desirable to act from (as in striving play); the second kind really are inferentially embedded, and hence real desires.
Games make the proper function of objects clear in a way it isn’t in real life, and this makes displaying and recognising the beauty of proper function easier.
Nguyen mentions that striving play may have implications for Parfit’s state-given reasons, Velleman’s guise of the good, and Korsgaard’s self-constitution.
There’s also the role of indeterminacy in games, which is of course present in live performance too, but game authors can bake it in more precisely. Two sources: choice and chance. CYOA books have choice but no chance (though they could be modified to have chance, which would suck).
Oh, how does Conway’s Game of Life work as a test case for these theories? I don’t really mind if they exclude it, of course.
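For reference, the whole of the Game of Life fits in a few lines, which is what makes it an awkward test case: a zero-player system with no lusory goal and no voluntary obstacles, only a rule applied to a state. A minimal sketch using the standard B3/S23 rules:

```python
from collections import Counter

def step(live: set) -> set:
    """One tick of Conway's Game of Life (standard B3/S23 rules)."""
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Born with exactly 3 live neighbours; survives with 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A 'blinker' oscillates with period 2 -- rule-following without a player.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))                   # vertical: {(1, 0), (1, 1), (1, 2)}
print(step(step(blinker)) == blinker)  # True
```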