In the course of most arguments, factual or empirical claims will be made. A factual or empirical claim is one that can actually or in theory be tested. One common type of empirical claim is the generalization; another, a close relative of the first, is the poll. Let's look at each in turn.
A generalization is when an arguer moves from observations about some specific phenomena or objects to a general claim about all phenomena or objects belonging to that group. For example, I go to McDonald’s and order a Big Mac and it costs $1.50. Then I go to another McDonald’s and order another Big Mac and it also costs $1.50. Based on these observations I generalize to the conclusion that all McDonald’s restaurants will charge $1.50 for a Big Mac. (I’m such a great scientist.)
Another example would be if I ordered 1000 T-shirts that say “I Love Soviet Uzbekistan”. Upon receiving the order I might look at 5-10 of the shirts in 2 of the 10 boxes to make sure they were printed properly; then, from those specific samples, I’d generalize to the conclusion that all the T-shirts were printed correctly.
There’s nothing really fancy going on with generalizations. We all do it a lot in our everyday lives because it’s practical and often it wouldn’t make sense to do otherwise.
General vs Universal Claims
At this point we should make a distinction between a general claim and a universal claim. A universal claim says that all Xs have the property Y. For example, all humans have a heart and a brain. Universal claims are much stronger than general claims. General claims admit of exceptions but are generally true of a set of objects. For example, generally students like to sleep. We could possibly point out some counter-examples, like Jittery Joe who doesn’t like to sleep. But, despite the occasional exception, we can accept the generalization as true.
Let’s get technical for a moment and formalize these structures:
A universal claim will generally 😉 have the form, “all Xs are Y”.
A general claim will generally have the form, “Xs are, in general, Y,” or “Xs are Y,” or “Each X is probably Y”.
Sometimes (surprisingly) an arguer in conversation or in an article won’t spell things out or use the exact language I’ve specified here to distinguish between the two types of claims; nevertheless, if you pay attention to context, you should be able to determine which is being made.
The main point to understand is that universal claims don’t allow any exceptions whereas general claims do. Also, from the point of view of constructing and evaluating arguments, it is much more difficult to defend a universal claim than a general claim.
One last type of generalization is the proportional claim. As you might expect, this type of claim expresses a proportion. For example, looking through the first 2 boxes of my T-shirts, I notice that 1 out of every 7 is missing the letter “I”. So, even though I don’t check the remaining boxes, I conclude that 1 out of 7 of the T-shirts in those boxes is also missing an “I”. (I.e., I made a proportional generalization.)
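The arithmetic behind a proportional generalization can be sketched in a few lines of Python. The shirt counts below are hypothetical, chosen just to mirror the T-shirt example:

```python
# A minimal sketch of a proportional generalization.
# All numbers are hypothetical, mirroring the T-shirt example.

def proportional_generalization(defective, sample_size, population_size):
    """Extrapolate a proportion observed in a sample to the whole group."""
    proportion = defective / sample_size
    return round(proportion * population_size)

# Suppose 10 of the 70 shirts I inspect (1 in 7) are missing an "I".
# Generalizing to the full order of 1000 shirts:
print(proportional_generalization(10, 70, 1000))  # prints 143
```

Of course, the code only does the extrapolation; whether the inference is legitimate depends on the nature and size of the sample, which is the subject of the rules below.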
The thing is (as you might expect), there are legitimate and illegitimate generalizations, and which is which has much to do with the nature and size of the sample from which the generalization is being made.
Sample Size Me!
For obvious reasons, the larger the sample size, the more accurately it will reflect the properties in the entire group of objects. For instance, if I see one student and she tells me she has student loans, I shouldn’t conclude that all students have loans. Maybe I talk to 3 students and they also tell me they have student loans. It could be that I just happened to talk to the 3 students who have student loans; it doesn’t mean that all students have them. The sample is still too small for me to legitimately make any inferences about all the students.
Now suppose I talk to 400 students and 100 of them (amazing round numbers!) tell me they have student loans. At this point I might be able to make a reasonable generalization about students at that particular school or maybe in that particular region or state.
One worry is that our sample is too small to justify generalizations. The other is that our sample isn’t representative enough of the group about which we are making the generalization. For instance, if I wanted to make a generalization about the rate of US students with student loans, it wouldn’t be enough to collect data at only one school. My sample has to have about the same proportion of sub-groups as does the general population I want to generalize about. Maybe my sample happens to be from a rich school. Maybe not. Either way, this doesn’t represent the average school. Maybe that particular state provides excellent funding, maybe not. Again, I want to make sure my sample represents the proportion of states or schools that do and don’t provide excellent funding.
What is needed is to take samples from all over the country to have a representative sample of the larger group. That is, the sample from which we will generalize should be broad enough to negate the clumpiness and should be representative of the group we are trying to generalize about.
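To see why a “clumpy” sample misleads, here is a small simulation in Python. The loan rates and school sizes are made up for illustration; the point is the comparison between a sample drawn from a single school and a sample drawn from the whole population:

```python
import random

random.seed(0)  # make the simulation reproducible

# A toy population of 10,000 students: one hypothetical "rich" school
# where few students have loans, and everyone else at a higher rate.
rich_school = [random.random() < 0.10 for _ in range(2000)]    # ~10% have loans
other_schools = [random.random() < 0.60 for _ in range(8000)]  # ~60% have loans
population = rich_school + other_schools

def loan_rate(group):
    """Fraction of a group that has student loans."""
    return sum(group) / len(group)

true_rate = loan_rate(population)                # around 50%
clumpy_sample = random.sample(rich_school, 400)  # 400 students, one school only
random_sample = random.sample(population, 400)   # 400 students, drawn everywhere

# The clumpy sample lands near 10%; the random sample lands near the truth.
print(loan_rate(clumpy_sample), loan_rate(random_sample), true_rate)
```

Notice that the clumpy sample is just as large as the random one; size alone doesn’t save a sample that isn’t representative.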
In terms of evaluating and constructing arguments, beware of anecdotal evidence! Why? Remember biases? Biases have a huge influence over what gets reported and what doesn’t. If we experience something that runs against our bias we tend to ignore it; on the other hand, we overemphasize experiences that conform to our biases. When we are using testimony as evidence (i.e., anecdotal evidence), we should be aware of this and how it increases the likelihood that our sample is biased.
Here are some common sources/common examples of bias: “it worked for me (or my Aunt Martha), therefore it works for everyone”. Another common bias (and a huge issue/problem in social psychology right now) is generalizing about human behavior from samples that have a geographical bias.
In other words, for decades social psychologists and psychologists have been making generalizations about all of human psychology from samples of US college students. As it turns out (from recent cross cultural studies) US culture is an outlier in terms of what’s “normal” psychology throughout the world. Yup. We’re the weird ones, not the rest of the world. Oh! Snap! (but of course our way of being is the right way!)
In medicine, great lengths are taken to protect against a biased sample. The gold standard is a double-blind, placebo-controlled, long-term, replicated study that includes different populations (i.e., ethnic groups) and both sexes. The hallmark of pseudoscience in medicine is that often these standards are not applied or the sample is too small.
Rules for Good Generalizations
We can think of generalizations as following (implicitly or explicitly) the following argument scheme:
(S=the sample group, X=the group of objects that the generalization will be about, Y=the property we’re attributing to Xs)
P1. S is a sample of Xs.
P2. The proportion of Ss (that are part of X) that have property Y is Z.
C. The proportion of Xs that have property Y is Z*.
*see rule 4 below.
Let’s use an actual example to get away from the alphabet soup:
P1. The students in this class are a (representative) sample of UNLV students.
P2. The proportion of the students in the class that have student loans is 60%.
C. Therefore, the proportion of UNLV students with loans is around 60%.
P1. These 10 species of cats are a sample of all cat species.
P2. The proportion of cats in the sample that land on their feet when dropped from over 4 feet is 100%.
C. Therefore, all cats will land on their feet when dropped from over 4 feet.
To evaluate generalizations we essentially want to scrutinize P1 and P2 and their logical connection to C. To do so we ask whether:
1) The sample size is reasonable for the scope of the generalization.
2) The sample avoids biases.
3) Objects/Phenomena in the sample (X) do indeed have the property Y.
4) The proportion of Xs with property Y in the sample is greater than or equal to the claimed proportion of Xs with property Y in the generalization. (In other words, I can’t say that 30% of Xs in my sample have property Y, yet generalize that 40% of Xs have property Y.)
If a generalization violates one of these 4 criteria then it likely isn’t a defensible generalization.
Famous Polling Fails in US History
Polling is a subset of generalization, so many of the rules for evaluation and analysis will be the same as in the previous section. Polling is a generalization about a specified population’s beliefs or attitudes. For example, during election campaigns, the populations of important “battleground” states are usually polled to find out what issues are important to them. Upon hearing the results, the candidate will then remove what’s left of his own spine and say whatever that population wants to hear. (Meh! Call me a cynic!)
Suppose I were to conduct a poll of UNLV students to determine their primary motivation for attending university. To begin the evaluation of the poll we’d need to know 3 things:
(a) The sample: Who is in the sample (representativeness) and how big the sample was.
(b) The population: What is the group I’m trying to make the generalization about.
(c) The property in question: What is that belief, attitude, or value I’m trying to attribute to the population.
Recall from the previous section that generalizations can be interpreted as having an (implicit or explicit) argument form. Let’s instantiate this argument structure with a hypothetical poll. Suppose I want to poll UNLV students with the question, “Should critical thinking 102 be a graduation requirement?” Because I have finite time and energy I can’t ask each student at the university. Instead I’ll take a sample and extrapolate from that. My sample will be students in my class.
P1. A sample of 36 students from my class is a representative sample of the general student population.
P2. 65% of the students in my class (i.e., the sample) said they agree that critical thinking 102 should be a graduation requirement.
C. Therefore, we can conclude that around 65% of UNLV students think that critical thinking 102 should be a graduation requirement.
There are 2 broad categories of analysis we can apply to the poll results:
Questions about sampling errors apply to P1. They are basically: (a) is the sample size large enough to be representative of the group, and (b) does the sample avoid any biases (i.e., does it avoid under- or over-representing one group over another in a way that isn’t reflective of the general population)?
Regarding sample size, national polls generally require a (representative) sample size of about 1000, so we should expect that a poll about the UNLV population could get by with quite a bit less than that. Aside from that, (a) is self-explanatory and I’ve discussed it above, so let’s look a little more closely at (b).
The question here is whether the students in my class accurately represent all important subgroups in the student population. For example, is the sample representative of UNLV’s general population’s ratio of ethnic groups, socio-economic groups, and majors? You might find that there are other important subgroups that should be captured in a sample depending on the content of the poll.
Someone might plausibly argue that the sample isn’t representative because it disproportionately represents students in their 1st and 2nd years.
We can ask a further question about how the group was chosen. For example, if I make filling out the survey voluntary then there’s a possibility of bias. Why? Because it’s possible that people who volunteer for such a survey have a strong opinion one way or another. This means that the poll will capture only those with strong opinions (or those who just generally like to give their opinion) but leave out the Joe-Schmo population who might not have strong feelings or might be too busy facebooking on their got-tam phone to bother to do the survey.
In order to protect against such sampling errors, polls should engage in random sampling. That means no matter what sub-group someone is in, they have an equal probability of being selected to do the survey. We can also take things to a whole. nuva. level. when we use stratified sampling. With stratified sampling we make sure a representative proportion of each subgroup is contained in the overall sample. For example, if I know that about 30% of students are 1st-year students, then I’ll make sure that 30% of my sample is randomly drawn from 1st-year students.
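Here is what stratified sampling looks like sketched in Python. The year-group sizes are invented for illustration, but the mechanism (sample each subgroup in proportion to its share of the population) is the real one:

```python
import random

random.seed(1)  # reproducible

# Hypothetical student body of 10,000, grouped by year (numbers made up).
students_by_year = {
    "1st": list(range(0, 3000)),       # 30% of the population
    "2nd": list(range(3000, 5500)),    # 25%
    "3rd": list(range(5500, 8000)),    # 25%
    "4th": list(range(8000, 10000)),   # 20%
}

def stratified_sample(groups, total_sample_size):
    """Randomly sample each subgroup in proportion to its population share."""
    population_size = sum(len(members) for members in groups.values())
    sample = []
    for name, members in groups.items():
        share = len(members) / population_size
        k = round(total_sample_size * share)  # this stratum's quota
        sample.extend(random.sample(members, k))
    return sample

sample = stratified_sample(students_by_year, 400)
print(len(sample))  # 400 total: 120 first-years, 100 + 100 upperclassmen, 80 seniors
```

Within each stratum the selection is still random; stratification only fixes the proportions between strata.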
Another thing to consider in sampling bias is margin of error. The margin of error (e.g., +/-5%) gives the range within which the true population value is likely to fall. Margin of error is important to consider when there is a small difference between competing results. For example, suppose a survey says 46% of students think Ami should be burned at the stake while 50% say Ami should be hailed as the next messiah. One might think this clearly shows Ami’s well on his way to establishing a new religion, but we’d be jumping the gun until we looked at the poll’s margin of error.
Suppose the margin of error is +/-5 percentage points. This means that those who want to burn Ami at the stake could actually be as many as 51% (46+5), while those who want to make him the head of a new religion could be as few as 45% (50-5). Since the two ranges overlap, the poll can’t tell us which view really has more support. Ami might have to wait a few more years for world domination.
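For the statistically curious, the conventional 95% margin of error for a polled proportion is roughly 1.96 x sqrt(p(1-p)/n). A quick Python sketch (poll sizes hypothetical) shows why a sample of about 1000 yields the roughly 3-point margin national polls aim for, while a smaller poll is much fuzzier:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error (as a fraction) for a polled
    proportion p from a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical 400-person poll near a 50/50 split:
print(round(margin_of_error(0.5, 400) * 100, 1))   # 4.9 percentage points
# The ~1000-person national standard:
print(round(margin_of_error(0.5, 1000) * 100, 1))  # 3.1 percentage points
```

Note that the margin shrinks with the square root of n, so halving the margin of error requires quadrupling the sample.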
As I mentioned in the beginning of this section, questions about sampling error are all directed at P1; i.e., is the sample representative of the general population about which the general claim will be made. Next we will look at measurement errors, which have to do with the second premise (i.e., that the people in the sample actually do have the beliefs/attitudes/properties attributed to them).
Measurement errors have to do with scrutinizing the claim that the sample population actually has the beliefs/attitudes/properties attributed to them in the survey. Evaluating polls for measurement errors generally has to do with how the information was asked for and collected, how the questions were worded, and the environmental conditions at the time of the poll.
As a starting point, when we are looking at polls that are about political issues, we should generally be skeptical of results, especially when polling agencies that are tied to a political party or ideology produce competing poll results that conform with their respective positions. In short, we should be alert to who is conducting the poll and consider whether there may be any biases.
One specific type of measurement error arises out of semantic ambiguity or vagueness. For example, suppose a survey asks if you drink “frequently”. This is a subjective term and could be interpreted differently: for some people it might mean once a week, for others once a day. A measurement error will be introduced into the data unless this vagueness is cleared up. Because more people probably think of “frequent drinking” as being “more than what I personally drink”, the results will be artificially low. They also won’t be very meaningful because the responses don’t mean the same thing.
Another type of measurement error arises when we consider the medium by which the questions are asked. Psychology tells us that people are more likely to tell the truth when asked questions face to face and less so when asked over the phone. Even less so when asked in groups (groupthink). These considerations will introduce measurement errors; that is, they will cast doubt on whether the members of the sample actually have the quality/view/belief being attributed to them.
When evaluating measurement accuracy we should also consider when and where the poll took place. For example, if, during exam period, students are asked whether they think school is (generally) stressful, probably more will answer in the affirmative than if they are asked during the 1st week of the semester.
Also, going back to our poll of students concerning having critical thinking as a graduation requirement, we might argue that the timing is influencing the results. The sample is taken from students currently taking the class. Perhaps it’s too early in their careers to appreciate the course’s value; yet if we asked students who had already taken the course and have had a chance to enjoy its glorious fruits, the results might be different.
Finally, we should be alert to how second-hand reporting of polls can present the results in a distorted way. Newspapers and media outlets want eyeballs, so they might over-emphasize certain aspects of the poll or interpret the results in a way that sensationalizes them. In short, we should approach with a grain of salt polls that are reported second-hand.
To summarize: For polling we want to evaluate (1) sampling errors: is the sample large enough and representative of the population (P1); and (2) measurement errors: do the individuals in the sample actually have the values/attitudes/beliefs being attributed to them (P2)?