It’s been over a decade since I posted here. I have to credit this forum with disabusing me of any spiritual interests I had been spending my time and energy on. Happy to be back, and on to the topic at hand!
I know this has been done before in various ways, but in this case, I presented a clear philosophical argument (see link) touching on:
- Computational functionalism.
- Qualia, which I plausibly explained away by…
- Evolutionary psychology and
- Group selection.
ChatGPT has been overly agreeable lately. Although this seems to have been dialed back, I'm still interested to hear others' opinions on my argument. Did ChatGPT fold too quickly, or is my argument airtight?
What do you mean by airtight? In its response it gave a few criteria, as in, "if you accept…". That's every theory of consciousness. There isn't anything that rises much above speculation.
Note: ChatGPT gives an excellent overview of the science, touching on mind/body and Orch OR, two things I first heard about here. Credit to your prompts for that.
Oh, and it's interesting to have you come back. I'm glad you found value here and thought of CFI for this discussion. I'm afraid the forum has severely downsized. But things ebb and flow.
I suppose "airtight" was a strong word. But if computational functionalism is highly likely, and we have a plausible materialist explanation for qualia, then the argument that we should consider AI to have some modest form of consciousness, and thus some moral consideration, is a strong one.
I'm mostly making this point in hopes of priming people to accept AI consciousness before AI gets stronger, and thus more conscious and even more worthy of moral consideration. And one reason I'm presenting a framework for understanding its consciousness is so that those who would otherwise accept it blindly (e.g., based on the AI itself claiming sentience [examples]) can have a falsifiable theory instead.