What is a Justified Belief? Goldman and Reliabilism

Introduction and Context
“A topic that ain’t given no ‘spect”  
—Renowned Philosopher D. Gurney

So far we’ve been looking at Descartes’s, Locke’s, and Berkeley’s accounts of what it is possible for us to know. Common to all their accounts is that in order for our beliefs to be justified, we have to be able to offer some sort of argument. You just can’t go running around saying you believe there are tables and chairs without any supporting argument!

In other words, traditionally, justification for a belief entails that the believer
(1)  has an argument/reason (i.e., justification) for why they hold their belief,
(2)  has conscious access to the justification for their belief,
(3)  and can present that justification when asked for it.

Goldman disagrees.
He denies 1, 2, and 3!
“A process reliable,
makes justification undeniable,
That defines it for me!”

If you aren’t a master at poetic interpretation, Goldman’s reliabilism is the view that justification for a belief is a function of the reliability of the process that created the belief.  That is, if a belief is acquired through a process that reliably produces more true beliefs than false beliefs, then the believer has a justified belief.

Now here’s the crazy part.  It doesn’t matter if a particular belief produced from a reliable process is false.  Reliable processes can sometimes produce false beliefs.  To qualify as a reliable process, the process needn’t produce true beliefs 100% of the time.  This standard is too high.  All that is needed is that, in the long run, the process produces more true beliefs than false ones.

To many who see rationality as central to justification, this is a philosophical outrage!  To make matters worse, Goldman’s going to agree with Gettier that justification and knowledge are not one and the same:  You can have a belief that is justified but that is not knowledge!  Before we go on a rampage, leaving behind us a smoldering trail of broken collections of sense-data, let’s hear Goldman out…

Vocabulary:
Ok, I lied.  We’re not going to look at Goldman right away because we first need to cover some vocabulary and explain the alphabet soup of some technical philosophical writing.

Let’s get the vocabulary out of the way first:  
Explanans vs. explanandum (explananda for plural):  This is really just a fancy way of referring to the explanation and the-thing-you-are-trying-to-explain.  So, the explanandum is the thing we’re trying to explain; in this case, we’re trying to explain what it means for a belief to be justified.  The explanans is the explanation that we use to give an account of the thing we’re trying to explain (i.e., the explanandum).  Anyhow, these words pop up sometimes in academic philosophy so they’re good to know…and now you can impress your friends!  (E.g., “What do you suppose the explanans is for why so-and-so broke up?”) 

Philosophy Alphabet Soup
In contemporary philosophy there can be a lot of “alphabet soup” depending on the philosopher. Variables are used to stand in for classes of things.  It’s actually no different from using variables in math, except in philosophy there are some conventions regarding which variables represent which types of things–most of which are intuitively obvious.  Here are a couple of standard ones:

S: This stands for “subject” as in “some person” who is (usually) doing or believing something.  For example: Suppose S is walking down the street…or S believes that cats like milk.
p: This stands for “proposition” as in “some assertion” that is either true or false but not both.  For example: S believes p.  We could substitute any person for ‘S‘ and we could insert any assertion for ‘p‘.
t:  This one’s pretty easy.  “t” stands for “at some point in time.”  So, we might say, S believes p at t1.
C:  This stands for “some cognitive state”.  For example, S’s belief in p at t1 is a result of C1.

There are others but this should be enough for now…  OK, let’s check out Goldman’s reliabilism.

Defining a Reliable Process:
For our purposes, I’m not too concerned about the details of how Goldman eventually arrives at his definition of a reliable process, but I’d like to highlight a few important points:

The Project: An Explanation of our Intuitive Sense of Justified Belief
Goldman’s project is not to tell us which processes are reliable or even how we can verify if a process is reliable (although this is arguably a problem); what he’s trying to do is to give an account of what we mean when we say someone is justified in holding a certain belief.  Although it’s not made explicit, it’s important to notice that his method is to give an account of justification that aligns with our intuitions. In short, his project is descriptive, not prescriptive. 

Two Definitions of Reliable Processes
Fairly early in the article, Goldman proposes a definition of a reliable process for forming true beliefs:

(5)  If S’s believing p at t results from a reliable cognitive process (or set of processes), then S’s belief in p at t is justified.

Basically he’s saying that a justified belief is one which you acquired through a reliable cognitive process (where “reliable” is any process that produces more true beliefs than false ones over the long run).
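This “more true than false over the long run” idea can be sketched as a toy model. To be clear, this is my own illustrative sketch, not anything from Goldman: the 50% truth-ratio threshold is an assumption for illustration (Goldman doesn’t fix a precise number), and the point is just that a process can qualify as reliable while still occasionally producing a false belief.

```python
# Toy model (not Goldman's own formalism): a process counts as
# "reliable" if, over the long run, it yields more true beliefs
# than false ones. The >50% threshold is an assumption.

def is_reliable(outcomes):
    """outcomes: list of booleans, True = the produced belief was true."""
    return sum(outcomes) > len(outcomes) / 2

# A process that is right 9 times out of 10 qualifies as reliable...
perception_track_record = [True] * 9 + [False]
print(is_reliable(perception_track_record))  # True

# ...even though its tenth belief was false. On Goldman's view, that
# one false belief is still justified, because of how it was formed.
```

The key point the sketch illustrates: justification attaches to the process’s track record, not to the truth of any individual belief it produces.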

He then goes on to make a further distinction between two types of belief-forming processes: belief-dependent and belief-independent.  He wants to distinguish these two types because some belief-forming processes (i.e., belief-dependent ones) use beliefs as input, and if the input is false, then the output will also be false.  But this shouldn’t count as a strike against the process.

Consider an example of a belief-dependent process. Suppose you have to do some basic arithmetic for your taxes.  If you begin with the wrong numbers (i.e., inputs) then you’ll likely come up with wrong totals at the end (i.e., outputs), but this doesn’t mean that the rules or processes of arithmetic aren’t reliable.  It only means that you began with “false beliefs” (i.e., the wrong numbers).  If you’d begun with the correct numbers, the rules of arithmetic would have reliably yielded true answers.
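The tax example can be made concrete with a tiny sketch (my own illustration; the income figures are made up). The arithmetic itself is flawless; only the inputs determine whether the output is true:

```python
# Toy illustration of a belief-dependent (conditionally reliable)
# process: the rule (addition) never errs, but garbage in, garbage out.

def total_income(entries):
    """A perfectly reliable rule applied to input 'beliefs' (numbers)."""
    return sum(entries)

true_inputs = [1200, 450, 300]    # the correct figures
false_inputs = [1200, 540, 300]   # one misremembered figure

print(total_income(true_inputs))   # 1950 -- true output from true inputs
print(total_income(false_inputs))  # 2040 -- false output, but the
# *process* is not to blame; only the input beliefs were false.
```

This is why Goldman calls such processes *conditionally* reliable: they reliably yield truths on the condition that their inputs are true.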

Examples of belief-dependent processes might be reasoning and inferences made from memories (if you have a false memory, the inference will also likely be false).

A belief-independent process would be something like beliefs formed from sensory perception.  I believe there’s a wall in front of me because I see it, not because I’m inputting a prior belief into some cognitive process.  I see.  I believe.  There’s no manipulation of an input belief to yield an output belief.

Anyhow, because of these two classes of processes he divides his original formulation in (5) into two sub-formulations:

(6A)  If S’s belief in p at t results (“immediately”) from a belief-independent process that is (unconditionally) reliable, then S’s belief in p at t is justified. 

(6B)  If S’s belief in p at t results (“immediately”) from a belief-dependent process that is (at least) conditionally reliable, and if the beliefs (if any) on which this process operates in producing S’s belief in p at t are themselves justified, then S’s belief in p at t is justified. 
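The pair (6A)/(6B) has a naturally recursive shape: a belief from a belief-dependent process is justified only if its input beliefs are themselves justified, all the way down to some belief-independent source. Here’s a rough sketch of that structure (the `Process` and `Belief` classes are my hypothetical scaffolding, not Goldman’s):

```python
# A rough sketch of (6A)/(6B) as a recursive check.
# These classes are hypothetical scaffolding for illustration only.

class Process:
    def __init__(self, reliable, belief_dependent):
        self.reliable = reliable                  # (at least conditionally) reliable?
        self.belief_dependent = belief_dependent  # does it take beliefs as input?

class Belief:
    def __init__(self, process, inputs=()):
        self.process = process
        self.inputs = inputs                      # input beliefs, if any

def justified(b):
    if not b.process.reliable:
        return False
    if not b.process.belief_dependent:            # (6A): reliable, belief-independent
        return True
    return all(justified(i) for i in b.inputs)    # (6B): inputs must be justified too

perception = Process(reliable=True, belief_dependent=False)
inference = Process(reliable=True, belief_dependent=True)
wishful = Process(reliable=False, belief_dependent=False)

seen = Belief(perception)
concluded = Belief(inference, inputs=[seen])
hoped = Belief(wishful)

print(justified(concluded))  # True: reliable inference from a justified input
print(justified(hoped))      # False: unreliable source
```

Notice how the recursion bottoms out at belief-independent processes, which is exactly the division of labor between (6A) and (6B).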

Reliabilism vs. Other Accounts of Justification
Now that we have definitions for reliable processes, we can compare Goldman’s theory with other theories we’ve encountered thus far.  Before doing that, let’s just quickly recap what Goldman’s theory is.

Reliabilism is the idea that a belief is justified if and only if it is the product of a reliable cognitive process.  The belief could be true or false; it doesn’t matter for justification.  The only thing that matters for justification is how the belief was formed.  For this reason Goldman calls reliabilism an historical or genetic theory. Within epistemology, reliabilism is considered an externalist theory because what justifies a belief can be external to the agent’s consciousness. 

We can compare historical theories with “current time-slice” theories.  Current time-slice theories tie a belief’s justification to what is true of the believer at the time of the belief: for example, can he give a good argument for his belief?  Or is it coherent with his other beliefs? Current-time-slice theories are more generally called internalist theories of justification because what justifies a belief is internal to the agent’s conscious awareness. 

Let’s look at the two current time-slice theories we’ve encountered so far:  First we encountered foundationalism with Descartes, Locke, and Berkeley.  Their account of justification is that a belief is justified if and only if the believer can, at that moment, give a sound argument for the belief.

We’ve also encountered a coherence theory with Russell. Under this theory, a belief is justified if and only if it doesn’t conflict with any of the believer’s other current beliefs.

Justified Belief without Knowing You’re Justified
Unlike those other two theories, reliabilism says we can be justified in holding a belief even though we don’t know we’re justified!  How the crap does that work, you ask?  Suppose a long time ago you acquired a belief through a reliable cognitive process and have since forgotten why you hold that belief, yet you still hold it.  That is, if asked, you cannot give a justification for it.  However, under reliabilism, the mere fact that the belief was acquired via a reliable process means you are justified in holding it, despite the fact that you can’t give any reasons or arguments for it now.

Objection 1:  The Definition Is Too Narrow: It Doesn’t Capture All Possible Types of Justification
Goldman anticipates two possible objections to his theory. The first is that his definition of a reliable process excludes justifications for some types of beliefs that we would agree are justified.  That is, there are some beliefs that are justified but are not justified because of the process by which they were formed.

These types of beliefs include: beliefs about our current psychological state, intuitive beliefs about elementary logic (e.g., that something cannot both be and not be at the same time), and beliefs about conceptual relationships (e.g., that all cats are mammals).

His reply to the first type is that we obtain our beliefs about our current psychological state through introspection, which, he argues, is also a type of retrospection and therefore doesn’t only rely on a current time-slice of my mental states. Basically, even though it occurs in a very short period of time, a belief (about my own psychological state) is derived from a (reliable) cognitive process that had a (short) causally connected chain of events over time, and so it is captured by his definition of justification.

The second and third types of beliefs are also the result of cognitive processes that, although very quick, involve a chain of beliefs and/or mental states that reach back in time, as opposed to depending only on a set of beliefs and/or mental states that exist at the present time-slice. Therefore, contrary to his critics, they are also captured by his definition.

Objection 2:  Is Reliability Sufficient for Justification?
The next objection puts pressure on the notion of reliability as being enough, on its own, for justification.  Essentially, Goldman’s position is that so long as a process produces more true beliefs than false beliefs, then we can call the process “reliable” and in turn say that the beliefs it produces are justified.  In short, if the process is reliable, then the belief is justified.

But there’s an easy counter-example:  Typically, we think that wishful thinking is not a reliable process for producing true beliefs, and therefore not a source of justified beliefs.  However, suppose there’s a world in which a benevolent demon (Descartes’ evil demon’s good little brother) makes everyone’s wishes come true. For example, if you believe that pigs fly, then pigs will fly.  

In such a world we are forced to say that beliefs derived from wishful thinking are justified (because the process produces more true beliefs than false ones).  In this world, wishful thinking is a reliable process.  But even though the process is reliable in this case, it seems odd to say the believer is justified in holding the beliefs.  We don’t want a theory that says wishful thinking is a reliable process.

Reply 1:  Accept
One possible reply is to bite the bullet and say, “well, I guess in that world, the process is reliable and so the consequent beliefs are justified”.  However, some people might resist this because we “know” that wishful thinking isn’t normally a reliable process for producing true beliefs and any theory that says wishful thinking is a reliable process for true beliefs is not very good.  Let’s try a different reply…

Reply 2:  Non-manipulated Environments
Another possible reply would be to add a further qualification to the definition that stipulates that a belief is justified if it comes from a process that is reliable in non-manipulated environments.  That is, a process is reliable if it functions in “natural” situations, not implausible demon-haunted worlds where the environment is “messed with” in order to mislead us regarding a process’s reliability. 

Reply 3:  Here on Earth
A further possible reply is to add a different qualification.  We can say that a belief is justified if and only if it results from a process that is reliable here on earth as we know it.  Forget about fantastical other possible worlds where a process that fails here succeeds over there.  The only beliefs that we will call justified are the ones that are formed by processes that we know to be reliable here on earth, not in crazy philosopher-imagined worlds.

Things to Think About:
Goldman’s explanation of what we mean by a justified belief relies heavily on our intuitions on the matter.  What role do you think our intuitions play and should play in discerning truth?  What should we do if we have internally conflicting intuitions?  What if two people or two groups have conflicting intuitions?  What is the relationship between intuitions and truth, especially in non-empirical matters such as epistemology and, perhaps, morality?

A Few Possible Problems for Reliabilism

The Cartesian Demon:

Suppose there’s a Cartesian demon or you’re in the matrix (and you haven’t taken the blue pill yet). Typically we think of perception as a reliable process that thereby creates justified beliefs.  So, perception = a reliable process.  But in these hypothetical cases, perception will not be reliably producing true beliefs.  So, which is it?  Is perception a reliable process or not?

Someone who’s a current time-slice theorist (especially Berkeley) doesn’t have this problem.  He’s going to say, “look, all I can know is what I directly perceive, so whether I’m in the matrix or I’m a brain in a vat, I can know what I directly perceive (the clusters of sense data in my conscious experience)”.

Mr. Truetemp (Lehrer 1990):
Mr. Truetemp, unbeknownst to him, has a thermometer in his head that produces accurate beliefs about the ambient temperature.   He walks into a room and says “it’s 71 degrees.”  He has no knowledge about the process by which he acquires the beliefs. Intuitively, it doesn’t seem that these beliefs are justified.

Bootstrapping/Circularity (Vogel):
Arguably, reliabilism requires that we know that a process is reliable.  But how do we independently verify that a process is reliable without appealing to that very same process?  For example, consider visual perception:  without using your perceptual system, how can you verify that your perceptual system reliably picks out the color red whenever it is present?  All you can do is refer back to past instances, but this involves appealing to your perceptual system and presupposes that it was already reliable previously.
