Article in new Morality section in the Web magazine "Evolution: This View of Life"

I see it has been over a year since I last posted here about understanding morality as an evolutionary adaptation, that is, the science of morality.
Anyway, I thought CFI members interested in the philosophy section would be a likely audience for a piece I wrote titled "Would Abandoning Moral Foundations Make For A Better Society?", which compares some recent work and comments of Steven Pinker and Jonathan Haidt on morality and moral foundations.
http://www.thisviewoflife.com/index.php/morality/index.php/

It is posted in a new Evolution of Morality section at the Web magazine Evolution: This View of Life. David Sloan Wilson (the evolutionary biologist) is editor.
My goal for the Morality section is to promote the understanding of morality as the product of biological and cultural evolutionary processes.
Comments and suggestions for new topics or good articles to link to would be appreciated.
Best regards,
Mark Sloan - Associate Editor, Morality Section

Yes, that was food for thought, thank you Mark. My key idea is that a lot of "bad" moralising is linked to belief in libertarian free will, so I think research into that would be useful. Stephen
What is "libertarian free will"? Whether we actually have free will (in the common sense) or not seems to me best understood as a subtle problem in physics, not moral philosophy. I don't see the relevance of the status of free will to what moral codes societies ought to enforce. Societies advocate and enforce moral codes to increase the benefits of living in those societies, such as increased material goods and psychological goods. That we might not have free will is irrelevant to what moral norms are most likely to achieve those benefits. That is, the moral justification of cultural norms comes from their consequences such as increased overall well-being in the society. Subtle questions about physics are irrelevant.
What is "libertarian free will"? Libertarian free will is the belief we could have done otherwise without anything out of our control being different. Since somethings out of our control would have had to be different for us to have done otherwise, we are merely lucky or unlucky how those things turned out.
Whether we actually have free will (in the common sense) or not seems to me best understood as a subtle problem in physics, not moral philosophy.
The common-sense view is as I described, and there is nothing subtle about it. It's the denial that people are merely lucky or unlucky to get the will they have, and the claim that they can somehow overcome this luck.
That is, the moral justification of cultural norms comes from their consequences such as increased overall well-being in the society.
Right, and the point is that belief in libertarian free will is a different justification, which is why it can be harmful. Anyhow, it was not my intention to get into a debate about it, rather to suggest research into what effect the belief is having. And it seems akin to, or perhaps simply is, belief in foundational morality, just another way of putting it. Stephen
Honestly, I don't see that belief or disbelief in free will has much effect on moral behavior. Perhaps to clarify, I see whether or not we actually have free will to be a subtle problem in physics which, as yet, has not been answered. But even if science comes to the conclusion we do not have free will, I don't see that as having any cause for concern except perhaps among some philosophy students. Groups will continue to punish immoral behavior to the extent it is in the group's interests to do so.
Whose morality? LL
The Morality section reports on current progress in understanding the origins and function of moral behaviors as 1) motivated by our 'moral' biology, such as that underlying emotions like empathy, loyalty, shame, guilt, and indignation, and 2) advocated by past and present enforced cultural moral codes. Much of that work is described in the literature as being on the evolutionary origins and function of cooperation and altruism. This work is concerned with what moral behaviors 'are', not what they 'ought' to be. While this science has implications for philosophical moralities, such as the best 'means' for achieving Utilitarian goals, it, like the rest of science, is silent regarding what our goals 'ought' to be, including our goals for enforcing moral codes (the main point of philosophical morality). If that is not what you were asking, you might clarify your question.
Whether we actually have free will (in the common sense) or not seems to me best understood as a subtle problem in physics, not moral philosophy.
No. The question of whether or not we have free will has little to do with physics. Only if it turned out that our behaviour is ruled by quantum randomness could we say we have no free will, and the hints go in the opposite direction: quantum randomness can be neglected for human behaviour. For moral philosophy, free will is not a problem; quite the opposite, it is a necessary presupposition. Morality without humans who can choose to behave according to its norms does not make much sense. The only way free will plays a role in morality is in the question of whether there are exemptions from punishment for people who are not capable of moral deliberation. But then we recognise immediately what free will means: being a moral agent. And that has nothing to do with physics (or genetics).

Concerning the main topic of your article: yes, I think the principle of 'increasing flourishing and reducing harm' should be the basis of every form of morality. However, cultures differ, and so the more specific peculiarities of morality may also differ. But I would like to add 'for everybody, human or animal, in the long term' to the principle. Very often morality is applied only to one's own group, excluding people with a different cultural background or another value system. But that corresponds exactly to the second set of three 'moral foundations' in Haidt.
Honestly, I don't see that belief or disbelief in free will has much effect on moral behavior.
Well, the suggestion is to find out. Many of us feel quite certain it does. Sam Harris is a good example. What he thinks is that if we realise that someone who behaves badly was unlucky that circumstances beyond his control led to that behaviour, and that we ourselves are merely lucky it didn't happen to us, then we have more empathy and less hatred, which is just part of why we think the belief makes a significant difference.
Perhaps to clarify, I see whether or not we actually have free will to be a subtle problem in physics which, as yet, has not been answered.
Well, I don't see how that applies to free will as I defined it, and that is the "common sense" view: it is commonly held that if determinism is true we can't have free will, because, if you think it through, under determinism we could not have done otherwise unless circumstances beyond our control had been different.
But even if science comes to the conclusion we do not have free will, I don't see that as having any cause for concern except perhaps among some philosophy students. Groups will continue to punish immoral behavior to the extent it is in the group's interests to do so.
This is rather a different point. The question is about the harmful effects of belief in libertarian free will. Anyhow, I'll leave it there unless you want to get into it, since I don't want to take over the thread with what is just a suggestion. Stephen
It is my fault we are getting off the topic of the thread because I carelessly mentioned a controversial position concerning free will. How about I start a new thread on that subject in the next few days? Then I can lay out the case that "Groups will continue to punish immoral behavior (as defined by the group) to the extent it is in the group's interests to do so" even if they agree there is no such thing as free will. What it comes down to is understanding what the ultimate goal of enforcing moral codes is and not mistaking a useful moral heuristic, "ought implies can", for an imperative moral principle.

Understanding morality as the product of biological and cultural evolutionary processes can provide a lot of useful knowledge about moral 'means', costly cooperation strategies. How we come to agree on moral ends, such as how to define in-groups and out-groups and whether moral regard should be expanded to all conscious beings, may rely on different sources of knowledge. I sympathize with adding "for everybody, human or animal, in the long term" to the principle, but don't know an objective basis for balancing the well-being of animals versus people. Any suggestions - I assume from moral philosophy rather than science? In any event, it sounds like you read the article. I hope you found it interesting.
It is my fault we are getting off the topic of the thread because I carelessly mentioned a controversial position concerning free will. How about I start a new thread on that subject in the next few days?
Don't! Unless you want to suffer... ;-)
Then I can lay out the case that "Groups will continue to punish immoral behavior (as defined by the group) to the extent it is in the group's interests to do so" even if they agree there is no such thing as free will. What it comes down to is understanding what the ultimate goal of enforcing moral codes is and not mistaking a useful moral heuristic, "ought implies can", for an imperative moral principle.
Well, maybe that could be interesting. Please go ahead, elaborate a little more on that. From what you've written here it is not quite clear to me what you mean.
I sympathize with adding "for everybody, human or animal, in the long term" to the principle, but don't know an objective basis for balancing the well-being of animals versus people. Any suggestions - I assume from moral philosophy rather than science?
I am not a specialist in ethics, so take everything I say with a grain of salt... I don't think there is an objective basis for ethics. The principle you mentioned does very well as a basis, and we can go into ever more detail to see how it works out in all kinds of situations, but ethics will never take the form of a science, in which we can discover in more and more detail how nature works. That does not mean, however, that ethics should not be rational, or that it is just subjective. It isn't. In science the promise of a possible consensus exists because 'the truth is out there'; but in ethics it is 'between us'. The problem with animals is that they cannot have a rational discourse with us. On the other hand, they can be very clear about whether they are suffering or feeling well. As humans, I think we are obliged to 'read their messages' and act accordingly.
In any event, it sounds like you read the article. I hope you found it interesting.
I especially liked the way Haidt distinguishes these 2 groups of moral foundations: on one side care/harm, fairness/cheating, and liberty/oppression, which seem a pretty close derivation of the 'main principle', and on the other side loyalty/betrayal, authority/subversion, and sanctity/degradation, which seem more improper values as a basis for morality. (I hope 'improper' is the right word here... English is not my native language.)
Actually, Haidt does not, so far as I have read, make that distinction between in-group morality and morality related to interactions with out-groups. His work just describes the empirically found six "moral foundations". This distinction is based on my own work in the science of morality. The data set by which I judge hypotheses includes 1) empirical results such as Haidt's six foundations, 2) experimental results such as judgments about moral dilemmas, 3) 'moral' emotions such as empathy, loyalty, guilt, shame, and indignation, and 4) past and present enforced moral codes. It might be improper for me to expound on my own unpublished and unreviewed ideas on the Evolution: This View of Life website, so you may not see that there. But I do plan to talk about subjects like "What is the science of morality's data set?" and "What is the ultimate source of morality? Biology, culture, or what?", where I can refer to the range of ideas on the subject and thereby refer to mine.

Morality is a system for governing/influencing social behavior. Any animal that relies on others for survival or in order to thrive (to the point of reproduction) has likely evolved social behaviors and hence may have the potential to behave “morally”. We humans go beyond that, in that we have evolved advanced verbal behavior abilities that allow us to codify morals, to examine them, to pass them on, and to abide by them, or not.

I would add that the emergence of culture forever unhitched morality from being only about the reproductive fitness benefits of cooperation. After the emergence of culture, people were able to enforce moral codes that produced whatever benefits of cooperation people found attractive, such as the emotional experience of well-being, even at the cost of reproductive fitness.

Yes, Mark, it’s called maladaptation. Seems like peoples with the “most advanced” sense of morality experience the lowest fertility.

Yes, I think that human morality is not only about its biological reproductive benefits. However, in order to be sustained, cultures must also have mechanisms for being passed on. Some of the most successful cultures (in terms of sheer numbers of persons identifying with a particular culture) include mechanisms for promoting biological reproduction of their members (e.g., be fruitful and multiply, and NEVER get an abortion). And then there is also the continuing influence of our biologically evolved propensities on the morality that we choose or develop.

A what, Tim? What does all that mean?

I would like to stress the word Tim used: examine. Morality can be explained evolutionarily, but it cannot simply be reduced to that explanation. Think about science: it is obvious that it brings evolutionary advantage. But what such an explanation misses is that science has intrinsic criteria for its progress: truth, the experimentally proven correspondence between ideas and reality. Something similar holds for morality: there are intrinsic criteria for the justification of morals, even if they are not as rigid as they are in science. Morals may not be objective, but that does not mean one can dismiss them as totally subjective or arbitrary. Morals must fit the values a society has, and the examination of this fit is a rational discourse. So seeing morality as a product of evolution is not wrong, and it can sometimes help to clarify moral discussions. But morality's status cannot be fully explained by evolution: one must take its intrinsic value and dynamics into account.
Are drones in a bee colony a maladaptation because they do not produce offspring? It takes more than one sentence to make the point that morality, as Mark describes it, really is a maladaptation.

It needs only one sentence to say that we are not bees.