Evolutionary moral psychologist Jonathan Haidt’s view of morality

Yes, the whole idea behind a moral code is that it instructs you to do what is not intuitive, or not do what is.
At least Peter Singer agrees with you. Surely not the whole idea. Maybe not even a necessary idea. I think the most important idea is to ensure that all people in a group/society/culture react in similar ways. 'Not intuitive' could mean counterintuitive, a missing intuition, or contradicting intuitions. For me, as a philosopher, a moral code should in the first place make a rational reconstruction of the ways a society actually behaves when moral decisions are involved: what are the most basic principles a society uses when its members make moral decisions? Then this can help to formulate moral guidelines in new situations, i.e. where our intuitions fail.
Okay. Suppose I were in a position of knowing that I could do something, quite easily, that would result in the death of one stranger (an innocent person, AFAIK), and I also knew that if I did nothing, 5 strangers (all innocent persons, AFAIK) would die. Should I do it? Or should I let the 5 strangers die? (If I did nothing, I expect that some others in society might hold me in lower esteem. But if I did what resulted in the death of the one stranger, I might also be held criminally and civilly liable, even though I essentially saved the lives of the other 5.) Can science help me decide what to do?
Maybe not, but common sense might. It would tell you to stop wasting your time on twisted hypothetical questions that never come up in real life. %-P Lois
The twisted hypothetical question is just a slant on a question that has been used in research that has led to insights, for example, on how we humans tend to respond differently to moral questions based on whether or not we imagine the details of an act of violence. I presented the question because I was curious as to what thoughtful responses might follow. %-P !
It might help to clarify that at least three kinds of oughts are in common use:

Instrumental oughts - If the farmer has a goal of growing a lot of beans, then he ought (instrumental) to use agricultural science to inform him how to do that. And if a society has a goal, for enforcing a moral code, of increasing the well-being benefits of living in that society, then they ought (instrumental) to use the science of morality to inform them what moral norms ought (again instrumental) to be advocated and enforced.

Cultural normative oughts - In cultures with moral norms that prohibit eating pigs, you ought (cultural normative) not eat pigs.

Universal normative oughts - Moral oughts that would be put forward by all rational, well-informed persons. Claims about universal normative oughts are the bread and butter of mainstream moral philosophy.

Science can only inform us about instrumental oughts, but these instrumental oughts can be about what cultural moral norms we ought to enforce in order to be most likely to achieve our goals for enforcing moral codes. To confuse instrumental oughts with universal normative oughts is, again, to make a category error.
I think this is way too much overthinking. That's just my opinion. Ought is just a word. It has the same meaning all the time. You are just listing 3 examples of how "ought" can be used in a sentence. Again, that's just my opinion. I see the same "ought" in all three examples above. Morals are what most people, most of the time, think we "ought" to be doing. Right? Every word ever devised is "just a word." Lois
Quoting TimB:

The twisted hypothetical question is just a slant on a question that has been used in research that has led to insights … I presented the question because I was curious as to what thoughtful responses might follow.

I know. But whatever we say we will do in a particular situation is probably false. Other factors, which we are unconscious of when we're out of the line of fire, suddenly come into the picture. Did you ever notice how many people say they intend to commit suicide and at the last moment they don't? This often happens when someone kills another person with the intention of killing himself as soon as he does it. Then, lo and behold, they change their mind about suicide at the last moment. They say, "I just couldn't do it." It also happens when people have no doubt that they will be brave in a certain situation, then run away like a scared rabbit when the situation presents itself. The road to hell is paved with good intentions. Lois
I have been thinking of a kind of Rule-Utilitarianism where the "Rule" (perhaps a set of rules) is based in the science of morality's cooperation strategies. That is, the "Rule" would define the moral means to a utilitarian end. As I expect most people would agree, simple Utilitarianism can demand behaviors that are intuitively highly immoral. Science can solve this problem by defining means to utilitarian ends that are inherently harmonious with our moral intuitions, and therefore inherently motivating.
Quoting I.J.:

I like how you try to combine science with morality. But what if science ever contradicts our moral intuitions? Even if we cannot emotionally stomach a particular way of acting (the abortion debate, for example), but it has been shown to be better for society as a whole, do we still act according to it? Personally, I choose to go where the proof is. I.J.

The science of morality is rapidly coming to the consensus that increased benefits of cooperation in groups is the selection force responsible for the existence of both cultural moral codes and the biology that underlies our sense of right and wrong, our moral intuitions. But moral codes and moral intuitions are commonly diverse, contradictory, and even bizarre. So how can they all advocate strategies for increasing the benefits of cooperation in groups (their function, the primary reason they exist)? They can all do so chiefly due to differences in 1) who is in privileged in-groups (just men?) and who is in out-groups (women, other races?), 2) markers of membership in and commitment to the society, such as circumcision, food prohibitions, loyalty to the group, respect for its authorities, sacred objects, and sacred ideas, and 3) emphasis on different cooperation strategies, such as direct and indirect reciprocity and hierarchies.

Using your example, one culture’s overriding moral norm might be “Always preserve life!”, and people living in that society might find abortion for any reason intuitively and repulsively wrong. Another society might put a higher priority on “freedom” and “increasing well-being”, and its members might find abortion morally acceptable. Which culture is ‘morally’ wrong as a matter of science? Neither. Science can identify “Always preserve life!”, “reducing suffering”, and “freedom” as all useful heuristics (usually reliable, but fallible, rules of thumb) for increasing the benefits of cooperation in groups. But since science is silent concerning the ultimate goal of the cooperation produced by ‘moral’ behaviors, in the absence of a specified ultimate goal science cannot say that either group is morally wrong. (However, if a society’s ultimate goal for enforcing moral codes is defined, such as increased well-being, then science should be capable of telling us which moral norms concerning abortion are most likely to actually achieve that goal.)

By revealing the universal function of morality, the science of morality may be most culturally useful in showing that norms and intuitions such as “Always preserve life!”, “reducing suffering”, and “freedom” are not fundamental moral principles at all, but merely necessarily fallible heuristics for increasing the benefits of cooperation in groups. This objective knowledge should then lead morality arguments away from contradictory intuitions about specific acts (moral heuristics) and toward arguing about what the ultimate goal of enforcing moral codes is going to be. Most groups should be able to agree that the ultimate goal of enforcing moral codes is something like “increase well-being”. If groups can agree on that, then objective science is fully capable of informing us as to what moral norms are most likely to achieve that goal. Depending on what you agree is the ultimate goal of enforcing moral codes, you might be forced by rational argument to conclude that your intuitions about abortion are simply wrong and a product of the happenstance of your cultural or personal history.

Like you, I have been looking for ‘proof’ concerning claims about morality.
I, and others, have found that 'proof' (provisional truth) in science’s view of the universal function of morality. Specifically, increasing the benefits of cooperation in groups is the cross-species universal and eternal function of morality in the same sense that the mathematics that define those cooperation strategies are cross-species universal and eternal.
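For readers who want a concrete picture of what "the mathematics that define those cooperation strategies" might look like, the standard toy model is the iterated prisoner's dilemma. The sketch below is my own illustration, not anything the poster specified; the payoff numbers are the conventional textbook values (T=5, R=3, P=1, S=0).

```python
# Minimal iterated prisoner's dilemma: direct reciprocity ("tit-for-tat")
# versus unconditional defection. Payoffs are standard textbook values
# and purely illustrative.

PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation (R, reward)
    ("C", "D"): 0,  # I cooperate, they defect (S, sucker's payoff)
    ("D", "C"): 5,  # I defect, they cooperate (T, temptation)
    ("D", "D"): 1,  # mutual defection (P, punishment)
}

def tit_for_tat(opponent_history):
    """Cooperate on the first move, then copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # A reacts to B's past moves
        move_b = strategy_b(hist_a)  # B reacts to A's past moves
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): reciprocity sustains cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): one exploitative gain, then mutual loss
```

In a single round, defection dominates; over many rounds with the same partners, reciprocators out-earn defectors. That repeated-game structure is the sense in which cooperation strategies can be called mathematically universal.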
Quoting TimB:

Suppose I were in a position of knowing that I could do something, quite easily, that would result in the death of one stranger … Should I do it? Or should I let the 5 strangers die? … Can science help me decide what to do?
If an ultimate goal for enforcing moral codes is specified, then yes, I expect science could inform us as to which moral norms, and therefore which act, would be more likely to actually achieve that goal and would therefore be moral. (Note that science is silent, as a matter of logic, concerning what our ultimate goals must or ought to be.) You might read my response to I.J.'s similar question about abortion, post #25.
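As a purely illustrative sketch of that "specify the goal first, then compute" point: if the agreed ultimate goal were simply "minimize expected innocent deaths", and the relevant probabilities were somehow known (both large assumptions, and every number below is invented), the comparison TimB asks about becomes mechanical.

```python
# Hedged sketch of "specify the ultimate goal, then let evidence decide".
# Assumes the agreed goal is "minimize expected innocent deaths" and that
# the outcome probabilities are known -- both assumptions, with all
# numbers invented for illustration.

def expected_deaths(p_one_dies_if_i_act, p_five_die_if_i_wait):
    deaths_if_act = 1 * p_one_dies_if_i_act    # acting risks one stranger
    deaths_if_wait = 5 * p_five_die_if_i_wait  # doing nothing risks five
    return deaths_if_act, deaths_if_wait

act, wait = expected_deaths(p_one_dies_if_i_act=1.0, p_five_die_if_i_wait=1.0)
print(f"expected deaths: act={act}, do nothing={wait}")  # act=1.0, do nothing=5.0
# Relative to *this* goal, acting wins. Under a different ultimate goal
# (e.g. "never cause a death by my own hand"), the same facts support the
# opposite choice -- which is the point about science being silent on
# ultimate goals.
```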
Quoting an earlier post:

For me, as a philosopher, a moral code should in the first place make a rational reconstruction of the ways a society actually behaves when moral decisions are involved: what are the most basic principles a society uses when its members make moral decisions? Then this can help to formulate moral guidelines in new situations, i.e. where our intuitions fail.
I would be interested in what you think of my response to I.J. in #25. The large contribution that science makes to morality is revealing morality’s universal function (a claimed descriptive fact about past and present moral codes and moral intuitions). As part of the process of uncovering that universal function, science reveals the origins and function of norms such as “abortion is wrong”, “homosexual behavior is wrong”, “women must be submissive to their husbands!”, “eating pigs is wrong”, and “rejecting your religion deserves death!” It seems to me that, by revealing the sometimes silly and sometimes shameful origins of such norms, people will be able to have more sensible conversations about whether such norms should be enforced in their society or abandoned.

How about “survival of the species”? Could that be an ultimate goal?
If so, I will still have to wait for science to tell me which course of action is most likely to achieve it: kill the one stranger by acting, or let the 5 strangers die by not acting.
In the meantime, I hope the scenario does not actually come up.
(It just occurred to me that if my society’s ultimate goal is “harmonious social interaction”, I am probably being immoral by not using emoticons, more often, to indicate when I am being completely serious, and when I am sort of joking, as I have a tendency to do both, and sometimes rather haphazardly… But then, again, maybe my society’s ultimate goal is “maximize amusement for all”.)

Quoting Lois:

Maybe not, but common sense might. It would tell you to stop wasting your time on twisted hypothetical questions that never come up in real life.
Actually these scenarios happen all the time. In real life. Ask a firefighter or EMT, a combat leader (a Sgt. or Lieutenant, for example), or a triage doctor in an on-scene mass-casualty environment.
When did you (or anyone you know) ever find yourself "in a position of knowing that I could do something, quite easily, that would result in the death of one stranger," knowing that "if I did nothing, 5 strangers would die"? When has this ever come up? I must say, in my whole longish life nothing quite so cut and dried has ever presented itself, nor do I know anyone else who has experienced anything like it. But your life may be much more exciting than mine. %-P Lois
I didn't take it literally. It seemed more like a general illustration of those principles: a quick way of describing a situation in which people would have to make a moral decision based on "the lesser of two evils". I think TimB was just looking to extrapolate on that, Lois, not nitpicking syntax or dotted "i"s and crossed "t"s.

I am skeptical about how science could function effectively to guide our development of morals, even if we were able to come to an agreement about our ultimate goal/s. Perhaps it would be better, theoretically, than what we have now. But a process, put into place in actuality, would be subject to abuse. And even if that problem were overcome, the history of science, and even the process of good scientific inquiry, results in changes in what we “know”, as new data comes in.
I would feel bad if my scientifically guided morals resulted in my killing the one stranger, and then, as new scientific studies came in, science determined that I actually should have let the 5 strangers die.

Quoting the reply above:

I didn't take it literally. … I think TimB was just looking to extrapolate on that, Lois, not nitpicking syntax.

I know what he was doing. I was just giving him a hard time. I like TimB. He's a good thinker. I have found that what people say they will do when answering a hypothetical question tells nothing of what they will actually do if the situation presents itself. Nobody can account for emotion and other factors at a critical moment. When we're asked such a question, it's easy to puff ourselves up and see ourselves as braver, smarter, and less emotional than we actually are. Sometimes we answer in a way that shows we'd give up. ("If that happened to me, I'd kill myself.") Then reality sets in. So answers to hypothetical questions don't tell us much. Lois

Quoting Lois:

Sometimes we answer in a way that shows we’d give up. (“If that happened to me, I’d kill myself.”) Then reality sets in. So answers to hypothetical questions don’t tell us much.
I learned that in the first grade. The teacher was having us write a b and an o with the o connected to the tail of the b. I kept looping the o so it was more of a scallop. Miss Fagan said, “{occam}, if you ever manage to get that right, I’ll faint.” So I drew a b with a long tail, then drew the o under and connected to the middle of the tail. Sure, I was cheating, but I had never seen anyone faint, so it was worth it. When she looked at it, she said, “Very good” and walked away. To this day I feel the disappointment because she didn’t live up to her hypothetical statement. :lol:
Occam

O, It’s good that your teacher did not reinforce your cheating by fainting, else you would have, subsequently, been more inclined to cheat.

Quoting TimB:

Perhaps it would be better, theoretically, than what we have now. But a process, put into place in actuality, would be subject to abuse. … I would feel bad if my scientifically guided morals resulted in my killing the one stranger, and then, as new scientific studies came in, science determined that I actually should have let the 5 strangers die.
Good point. Maybe we should try to put aside things which are empirically doubtful and put our bets on things on which there is consensus. Gun control is one example: the evidence is contradictory, so I say let's first focus on more important things, like maybe changing punishments. From what I have studied (see the link below for one ebook; chapter 4, more specifically, if I remember correctly), harsh punishments combined with high standards of evidence for conviction are good for reducing crime. http://books.google.com/books?id=ft0r5H1tviUC&printsec=frontcover&dq=editions:T9Cj8_Y80nkC&hl=en&sa=X&ei=KiVTU8HqD4axyATUnoLoAQ&ved=0CC0Q6AEwAA#v=onepage&q&f=false So we should leave aside the doubtful (gun control) and focus on what most experts and scholars agree on (changing the justice system).
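The deterrence claim cited here is, at bottom, an expected-cost argument: the disincentive an offender faces is roughly the probability of conviction times the severity of the punishment, while a high standard of evidence keeps the expected cost to the innocent near zero. A back-of-the-envelope sketch, with all numbers invented for illustration:

```python
# Back-of-the-envelope version of the deterrence logic cited above:
#   expected penalty = P(conviction) * severity
# Harshness raises severity; a high standard of evidence aims to keep
# P(conviction) high for the guilty and low for the innocent.
# All numbers are invented for illustration.

def expected_penalty(p_conviction, severity):
    return p_conviction * severity

SEVERITY = 10.0  # a "harsh" punishment, in arbitrary penalty units
print(expected_penalty(p_conviction=0.7, severity=SEVERITY))   # guilty: 7.0
print(expected_penalty(p_conviction=0.01, severity=SEVERITY))  # innocent: 0.1
# Deterrence stays high for the guilty while the innocent bear almost no
# expected cost -- the combination the cited argument favors.
```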