

Why do we need that discussion if it can be reduced to responsibility?
If something can be held responsible, then it can have all kinds of rights.
Then, of course, the people who decide to employ that responsible something in positions affecting lives are responsible for that decision.
For that to be possible, they would have to be fundamentally a level above what they are now. I read a paper on this that someone linked in a Lemmy thread a year or so ago.
I think a more manual approach might work: a world model like Crusader Kings has, with traits, ties, opinions, and random events among NPCs and towards the player, and the AI used simply to rephrase and slightly adjust descriptions and sequences of events - then maybe.
But consider how many NPCs that means, how many others they meet in their simulated lives, and how hard it would be to debug a storyline to ensure it's always playable.
An LLM is not, strictly speaking, necessary here, and using one doesn't make any of that easier.
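
To make the "manual" idea concrete, here's a minimal sketch (Python, every name hypothetical, not anything CK actually does): the simulation itself owns traits, opinions, and events, and the LLM, if present at all, only rewords the log line after the state change has already happened.

```python
import random
from dataclasses import dataclass, field

@dataclass
class NPC:
    name: str
    traits: set[str] = field(default_factory=set)
    # opinion of other NPCs (and the player), keyed by name
    opinions: dict[str, int] = field(default_factory=dict)

def tick(npcs: list[NPC], rng: random.Random) -> list[str]:
    """One simulation step: pick a random pair, fire an event, adjust opinions.

    The event log is plain canonical text; an LLM (if used at all) would only
    rephrase these strings, never decide what happens.
    """
    log = []
    a, b = rng.sample(npcs, 2)
    if "greedy" in a.traits:
        a.opinions[b.name] = a.opinions.get(b.name, 0) - 10
        log.append(f"{a.name} demands tribute from {b.name}.")
    else:
        a.opinions[b.name] = a.opinions.get(b.name, 0) + 5
        log.append(f"{a.name} shares a meal with {b.name}.")
    return log

def rephrase(event: str) -> str:
    # Hypothetical hook: hand the canonical event text to an LLM purely for
    # flavour. Game state was already updated in tick(), so a bad rewording
    # can't break the storyline.
    return event  # identity stand-in for the LLM call

rng = random.Random(42)
world = [NPC("Aldric", {"greedy"}), NPC("Berta"), NPC("Casimir", {"kind"})]
for _ in range(3):
    for line in tick(world, rng):
        print(rephrase(line))
```

The point of keeping the LLM behind a text-in/text-out boundary like `rephrase()` is that debugging the playability problem only ever involves the deterministic simulation, which is exactly the part that blows up combinatorially with the number of NPCs.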