Humanism needs an upgrade... Sentientism?

Hi,

I’ve just had this published in Areo Magazine and thought it might be of interest here: Humanism Needs an Upgrade: The Philosophy that Could Save the World - Areo

It focuses on sentientism, a little-known philosophy that, like humanism, applies evidence and reason. It then extends humanism by granting degrees of moral consideration to all sentient beings: non-human animals, but also potentially artificial or even alien intelligences.

Let me know what you think. It resonates strongly with me, so I was surprised to find so little public discussion of the topic.

Thanks,

Jamie.

It was a very interesting article, and I also enjoyed reading the comments. I can certainly imagine that someday, in a better world (say the 24th century, Star Trek’s time), a society might exist that grants rights to all animals capable of experiencing suffering. But in the here and now, I really can’t imagine it happening. A couple of weeks ago I was walking in a park where I live and happened to see a small doe crossing the path in front of me. She looked up and for a moment our eyes met. She froze, and I tried to act as if I hadn’t seen her. But in a moment the magic was broken and she darted off into the woods. I said to her, “That’s right, baby. Run as fast as you can. Humans will kill you if they see you.” I wish it didn’t have to be this way, but it is.

To answer your question, no I don’t think Humanism needs an upgrade yet. Religious people believe that Humanists worship themselves, and we’re having a hard enough time getting that idea out of their heads. Can you imagine trying to explain that Sentientists don’t worship animals?

Thanks Advocatus - glad you found it interesting.

I acknowledge the challenges re: persuading people and I’m happy to alter tactics to respond. However, I’m not willing to let those challenges undermine the philosophy itself unless the challengers have good counter-arguments.

I’d rather “worship” sentient beings or humans than something that doesn’t exist. However, neither humanism nor sentientism mentions worship - all we need to do is grant moral consideration and show some compassion. To my mind sentience (the ability to experience) is a better determinant for things that warrant compassion than any species boundary.

Yeah, the sentiments are wonderful (I also read it), but people are wrapped up in their own worlds. No one wants to change what they believe. Interest in constructive, challenging dialogue seems near nonexistent, and frankly we seem to have become more comfortable talking past each other.

Jamie, after reading your article I got to thinking you might find something interesting in this one:

Missing Key to Stephen Gould’s “Nonoverlapping Magisterium”

https://centerforinquiry.org/forums/topic/missing-key-to-stephen-goulds-nonoverlapping-magisterium/


If you do, let me know.

Since I’ve mentioned Star Trek, I think it’s interesting that your definition of “sentient” wouldn’t seem to apply to the android, Data, who appeared in that series. He was intelligent, capable of learning from his experiences, and obviously self-aware, but because he didn’t have emotions, he didn’t seem to “suffer” in either a physical or a mental sense. There was one episode in which he was under the control of his evil twin, Lore, and was forced to torture his best friend, Geordi La Forge. When he had returned to normal, he realized that what he had done was morally wrong, and he was very sorry, but only in an intellectual way. Even then he didn’t seem to “suffer” over the realization that he had come very close to killing his friend.

Of course I’m certainly not basing any argument on a fictional story, but it does make me wonder how “sentientism” would apply to artificially intelligent computers and in some cases to animals. Suffering is not just physical but mental. Has anybody ever asked a horse if it LIKES being ridden?

Thanks Citizenschallenge. It can feel frustrating given how deep dogma runs in most minds - but there’s cause for optimism too. You don’t have to look back many decades to see how far our default ways of thinking / moral frameworks have improved. There’s a long way to go, but change is happening every day. Also - people die and new people learn fast… It’s interesting to consider what future generations will think of us and what they’ll condemn that we now think normal. I suspect they’ll have a philosophy that looks a lot like sentientism.

I’m with you in finding Gould’s separate magisteria unsatisfying (or less politely - bullshit). It’s a massively complex exercise in dodging difficult and important questions. I prefer your split - but even there, our mindscapes are explicitly and clearly just another part of the wider real universe. Our minds are fascinating and special - but they’re not separate from reality.

Thanks Advocatus - interesting thought experiment. Sentience is the ability to experience subjectively, and in its broadest sense Data does have that quality. I have been asked whether, to qualify for moral consideration, sentience would have to have some sort of “hedonic tone” - some positive or negative quality, suffering or flourishing. If Data experiences things but never feels anything positive or negative - being perfectly neutral to all experiences - could we argue we don’t need to grant him moral consideration, given he can’t suffer or feel joy? My sense is that suffering / flourishing is more morally important than purely neutral “experience”, but I’d still grant neutral experience some moral worth. In practical terms, I also suspect that any sufficiently advanced artificial intelligence would need to incorporate positive / negative experience tones to operate. Thoughts welcome…

That’s a very good point about positive/negative feedback. In spite of his protests that he doesn’t “feel” anything, Data did admit that he would grow accustomed to certain inputs from the people he was acquainted with, and missed them when they were absent. In fact he grew quite fond of his pet cat. “The ability to experience subjectively” seems a pretty good definition.

Humans are merely evolved animals. We have not evolved terribly far past the other animals. Yes, we have some unusual abilities, but we cannot grant the claims we have been brought up with culturally in a post-Christian upbringing. Confronting the false assumptions we have been informally trained on is an important part of getting to the truth. In other words, we need to get back to ground level, away from the egotistical claims of human superiority. Our linguistic and technocentric complexity is impressive, but it is as much an accumulation from the past as it is an innate ability. For instance, we still celebrate Newton’s inverse square law of gravity, but how many of us could recover it independently? Our ability to soak up falsehoods is every bit as great as our ability to find the truth, and this is deeply problematic.

Scientific humanism would be a cut-down form of the modern presumption, but if it is accurate then the step should be taken. The human as a programmable entity poses a challenge to those who believe that AI ought to mimic the human. I’ve had some very unimpressive conversations with humans. Is that really the standard? Obviously it is not, and the Abrahamic religions are fine instances of programming run amok: exclusive belief systems that insist on their own propagation.

Without confrontational energy there will be no movement. I do think there is a softer side that explains the development of belief in gods, as well as superstitious beliefs, but those details include the study of the weaknesses of wee humans. Mimicry rules the day and explains most of what is going on. It is a good topic, but there is great cause for doubt.