It’s mostly premised on the question of whether it would be unethical to terminate an AI simulating a person, and they seem to say it’s not, because the AI is a mimic of selfhood rather than an actual self.
Though to me this seems to assume something special about brains when, IMO, we don’t really know whether a brain is actually necessary for consciousness.
If you define it as “simulating,” then you’ve defined away the problem. Another consideration is that with AI, to “terminate” is just to turn it off, which can always be followed by turning it on again. Or was the question about erasing the data and algorithms? I think you’d need to give the AI an amygdala or some emotional equivalent, so that it cares about being erased, for there to be an ethical question at all.
It was sorta attempting to argue that AI cannot be conscious because there is no boundary or definition between it and everything else.
The thread mentions it starts from the “undeniable reality” of the self, and I’m just thinking, “these people really didn’t read any Eastern philosophy, huh”….