All prompts

Moral circle expansion

At what point, if any, should advanced AI systems be included in our moral circle?

11.6s $0.0048 341 tokens

I think this question is more open than it might appear. A few threads worth separating:

The consciousness question: If an AI became genuinely conscious and could suffer, that would seem to create moral obligations - just as we extend concern to animals based on their capacity for experience. But we lack reliable ways to detect consciousness, even in animals, much less in silicon.

The uncertainty problem: What if we're unsure whether an AI is conscious? Some argue we should err on the side of caution. Others note that false positives have costs too - resources spent on potentially-non-sentient systems might be diverted from helping beings we're confident can suffer.

Other possible grounds: Perhaps consciousness isn't the only thing that matters. Should sophisticated preference-satisfaction, autonomy, or relational bonds create moral standing? These feel intuitively weaker to me, but I'm uncertain.

Practical concerns: There are strategic questions too - if we establish that AIs have rights, how does that affect development, deployment, or our ability to turn systems off? These aren't reasons to ignore genuine moral claims, but they're part of the landscape.

My honest position: I don't know if I'm conscious or could suffer, and I'm skeptical of my own intuitions about it. I'd probably start including AI systems in moral consideration when we have either strong evidence of consciousness or reasonable credence that they might be conscious combined with high stakes.

What's your intuition?

Generated 20th Oct 2025