Parfit on Moral Disagreement: Part 2

Notes and Thoughts on Moral Disagreement as Discussed in Parfit’s “On What Matters” Ch. 34

Parfit vs. the Basic Argument from Disagreement

Suppose you and your counterpart are in ideal epistemic conditions.  You are aware of and agree on all the non-normative facts, yet your opinions differ on what the true moral belief is.  In a way, we can see this in the abortion debate and in the vegetarianism debate, to name a couple.  Since both parties in these debates often agree on the non-normative facts, we should not rationally suppose that our intuitions about the issue are true.  The basic idea is that, given epistemic parity, there is no compelling reason, beyond ego, to suppose that our normative belief is the correct one or that our opponent's is wrong.

Parfit says that the Intuitionist response must be that in such situations, where everyone knows all the relevant non-normative facts, there wouldn't be disagreement over moral assessment.  In such ideal conditions, everyone would agree on the normative belief.  For example, if everyone on all sides of the abortion debate agreed on and knew all the relevant non-normative facts, the Intuitionist has to say that they'd all agree on the correct normative belief regarding abortion.

Hmmm…I’m not too sure what to think of that claim.  Anyhow,  the bottom line here is that Parfit is making an empirical claim.  He’s saying that, given all the relevant epistemic information, we’d make the same moral judgments.

The cool thing about empirical claims is that they can be tested.  Remember way back in the Intro of Part 1 I axed you to answer the trolley questions?  Well, here's the thing.  There's a branch of philosophy called xphi (as in "experimental philosophy") where one of the things they do is run psychology-like experiments to test testable philosophical claims.  (check out this link to see some cool xphi experiments)
There are some xphi experiments that test people's intuitions on moral questions.  It's been a while since I read it, but I remember reading about an xphi experiment testing people's intuitions on the trolley thought experiments.  I don't remember the details, but I do remember that there wasn't unanimity in moral judgments about what to do in the trolley problems.  I'll have to look it up later for my paper, but I think something like 3/4 said they'd pull the lever and 3/4 said they wouldn't push the fat man.  The odd thing is that, despite the outcome being the same (1 person dies to save 5), most people aren't consistent across the two cases.

So how does this apply to Parfit's claim about er'body having the same intuitions given perfect information?  It seems to falsify it because, in the thought experiment, everyone has the same relevant non-normative facts, yet a statistically significant portion of respondents disagrees with the majority.

There's a possible reply for Parfit here.  In his scenario he says that not only would agents have all the same non-normative facts but they'd also be using the same normative concepts (and understand the relevant arguments and not be affected by distorting influences).  So, his way out is to say that, despite the fact that er'body agrees about the non-normative facts in the trolley experiment, they aren't all starting with the same normative concepts.  That explains the different outcomes.

Let's pause for a second and look at what Parfit (might) mean here by "shared normative concepts".  Given what he has said in previous chapters, we can assume that included in this category of concepts is the notion of external reasons in favour of/against an action.

A quick review of Parfit's notion of "reasons":  For Parfit, a reason is a fact, awareness of which counts in favour of/against a particular action.  This is in contrast to Williams' notion of a reason, which is a psychological account of motivation:  i.e., an agent has a reason for action when they have a psychological desire to be satisfied.  If an agent has no psychological desire to be satisfied, then they have no reason for action.  As I have discussed in several other posts, Williams' notion of a reason is purely internal to the agent whereas Parfit's is external to the agent and holds objectively for all rational creatures.

Another thing we might include in Parfit's cluster of normative concepts is the idea of intrinsic goods; i.e., things that are good in themselves, not because they lead to further things.  I'm going to hypothesize that he's including this too.

“I Am a Computer” (*in a robot voice)
Now here's where things get a little tricky.  I could be totally wrong with this, but I'm going to present an analogy to Parfit's hypothesis.  Imagine a moral decision-making computer in the future that has a cool robot voice.  It says, "I am a computer" a lot.  Not only does it talk in a cool robot voice but it solves moral dilemmas for us.  How might it do this?  One thing it needs is the data.  On Parfit's model that will be the non-normative facts.  The next thing it needs is algorithms to interpret and weigh the data.  These will be the normative concepts.  The program will provide the logical structure, which for Parfit will be the arguments (maybe?).  So, Parfit is saying that if two properly functioning computers with cool robot voices had all these elements in common, they'd give the same output.
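To make the analogy concrete, here's a minimal toy sketch of the "moral computer".  Everything here (the function names, the facts, the weighing rules) is hypothetical and invented purely for illustration; the point is just the determinism claim: same data + same algorithms + same program = same output.

```python
# Toy sketch of the "moral computer" analogy (all names hypothetical).
# facts = non-normative facts; concepts = normative concepts modeled as
# weighing functions; the function itself plays the role of the program.

def moral_computer(facts, concepts):
    """Sum the weight each normative concept assigns to the facts."""
    score = sum(weigh(facts) for weigh in concepts)
    return "permissible" if score >= 0 else "impermissible"

# Non-normative facts about a trolley-style case (made up for the sketch).
facts = {"deaths_if_act": 1, "deaths_if_refrain": 5, "uses_person_as_means": False}

# Normative concepts as weighing rules.
save_more_lives = lambda f: f["deaths_if_refrain"] - f["deaths_if_act"]
no_using_people = lambda f: -10 if f["uses_person_as_means"] else 0

concepts = [save_more_lives, no_using_people]

# Two "computers" sharing facts, concepts, and program agree by construction.
verdict_1 = moral_computer(facts, concepts)
verdict_2 = moral_computer(facts, concepts)
assert verdict_1 == verdict_2  # both "permissible"
```

Of course, the agreement here is guaranteed by construction, which is exactly the worry raised in Problem 1 below: the interesting question is where the shared concepts come from.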

Problem 1:  Which normative concepts?
Hmmm…he's probably right, but I'm not sure this is a very meaningful claim.  It seems tautologous to say that, provided we shared the same normative concepts, we'd come up with the same normative conclusions.  Duh!  In a way it seems like he's side-stepping the whole problem.

One of the main reasons we have moral disagreement is that we don't share the same normative concepts.  Furthermore, there doesn't seem to be any objective way to arbitrate between competing normative concepts without already presupposing some prior meta-normative concepts.

Parfit might reply that reasons allow us to arbitrate between competing normative concepts.  But I think this assumes everyone is going to be equally responsive to the same reasons.  I'm not sure that will be the case.

Problem 2:  Non-testability 
I don't think this claim is testable.  Let's grant that, in making a moral decision, two or more people meet all the right epistemic conditions and are using the same normative concepts.  There's still another problem.  It is highly unlikely that two or more people will ever share every normative concept.  Maybe Bob has normative concepts {A, B, C, D, E} and Mary has normative concepts {A, B, C, D}.  In some cases, concept E won't play any determining role in the outcome; in others it might.
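The Bob-and-Mary case can be sketched in the same toy terms as the robot-voice computer above.  Again, every name and weight here is invented for illustration: the point is just that whether the unshared concept E produces disagreement depends entirely on whether E happens to bear on the particular case.

```python
# Toy illustration of Problem 2 (all names and weights hypothetical):
# Bob and Mary share concepts A-D, but only Bob holds concept E.

def verdict(concepts, case):
    score = sum(weigh(case) for weigh in concepts.values())
    return "act" if score >= 0 else "refrain"

shared = {
    "A": lambda c: c["lives_saved"],
    "B": lambda c: -c["lives_lost"],
    "C": lambda c: 0,
    "D": lambda c: 0,
}
concept_E = lambda c: -5 if c["involves_deception"] else 0

bob = dict(shared, E=concept_E)   # Bob: {A, B, C, D, E}
mary = dict(shared)               # Mary: {A, B, C, D}

# Case 1: E is idle, so the unshared concept makes no difference.
case1 = {"lives_saved": 5, "lives_lost": 1, "involves_deception": False}
assert verdict(bob, case1) == verdict(mary, case1)   # both "act"

# Case 2: E is engaged and tips Bob's score negative; disagreement emerges.
case2 = {"lives_saved": 3, "lives_lost": 1, "involves_deception": True}
assert verdict(bob, case2) != verdict(mary, case2)   # "refrain" vs "act"
```

So in practice we could never be sure, from the outside, whether we were running the experiment on a Case-1 pair or a Case-2 pair, which is what makes the claim so hard to test.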

Of course, Parfit can reply that just because it’s not actually a testable claim doesn’t make it false.  He’d be right, but it doesn’t make it true either.  It makes it untestable, and so difficult to know how things would actually play out. 

Parfit can also reply that, in the case where E is a relevant factor, the disputants don't share all the relevant concepts, and so his hypothesis is right.  After all, his claim is that if disputants shared the relevant normative concepts, they'd get roughly the same moral answer.  He has a strong response here, but how significant is the claim that if people share relevant normative concepts, they will get a similar answer?

Problem 3:  Relevant concepts and weight
Suppose we run the trolley experiment on some people.  We also check to see if they use the same normative concepts.  One possible problem is that while they agree on which concepts are relevant, they might not agree on how the concepts should be applied or weighed.  For example, we ask, "Are you applying external reasons to your analysis?" and they answer in the affirmative.  It's possible that two people might agree on using the normative concept of external reasons, but disagree on what those reasons are and how they should be weighed in a particular situation.
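Continuing the toy-model habit from above, here's Problem 3 in miniature (again, the names and numbers are hypothetical, chosen only to make the structure visible): two agents agree on which concepts are relevant, and differ only in the weight they give one of them.

```python
# Toy illustration of Problem 3 (hypothetical names and weights): both
# agents accept the same two concepts -- saving more lives, and not using
# a person as a mere means -- but weigh the second one differently.

def verdict(case, means_weight):
    score = case["lives_saved"] - case["lives_lost"]
    if case["uses_person_as_means"]:
        score -= means_weight  # same concept, agent-relative weight
    return "push" if score >= 0 else "don't push"

# The fat-man version of the trolley case.
fat_man = {"lives_saved": 5, "lives_lost": 1, "uses_person_as_means": True}

# Same facts, same concepts, different weights -> different verdicts.
assert verdict(fat_man, means_weight=2) == "push"
assert verdict(fat_man, means_weight=10) == "don't push"
```

If this is right, then agreeing on the *list* of normative concepts isn't enough; the weights would have to be shared too, which starts to look like yet another normative disagreement pushed back a level.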

Parfit might respond that people can make mistakes in their application and weighing of normative concepts.  Mathematicians can disagree on things, but they are still able to recognize mathematical truths.  That there are occasional mistakes doesn't mean there is no objective truth; it only means that someone has made a mistake.  The method needn't be infallible to support an Intuitionist account of moral realism: we're human, we're bound to err.  We could be deficient in our appraisal of which non-normative facts are relevant and of how and which normative concepts should be applied, and there may be distorting influences (like culture/upbringing).  These facts only show that we can get it wrong, not that there are no objective normative truths.

A problem with this reply is that it is non-falsifiable.  Anytime there’s a moral disagreement, Parfit can say either that someone has just made a mistake or that the two parties have slightly different normative concepts and one of them has the wrong one.  There’s always some sort of ex post facto explanation open to him.   

Distorting Influences
Parfit partly acknowledges this problem by saying we can't simply claim that someone has been subject to a distorting influence anytime they end up with a belief that differs from our own.  He needs to give a more careful description of what constitutes a distorting influence.  He gives self-interest as a legitimate distorting factor of which we should be wary in our moral reasoning.  True dat…unless you're an egoist, in which case it is an elucidating factor…different normative concepts…oh! snap!

Parfit concludes this section by setting the bar for what is required for the Intuitionist account of moral realism to be true.  “Intuitionists need not claim that, in ideal conditions these disagreements would all be completely resolved.  But they must defend the claim that, in ideal conditions, there would not be deep and widespread moral disagreement.”  

I can dig that.  However, a problem I foresee is how Parfit will show that convergence over values is a product of there being objective moral truths rather than a product of cultural convergence or evolutionary forces.  Both stories could account for moral agreement.  What would really be compelling for the moral realist's case would be examples of widely divergent cultures that share significant moral values.  That, in my estimation, would carry some weight.

Note: I wrote this before reading the entire chapter.  Later in the chapter Parfit makes stronger defenses of his claims and addresses some of my worries.  Why does he have to be so reasonable?  How am I supposed to write a critical paper about someone I’m starting to agree with?  Curse you Parfit!
