Logical Fallacies Part I:

Post Hoc, Hasty Generalization, and False Dilemma


So let’s begin by developing a shared understanding of a few concepts.


First, let’s say that “to think” is to cognitively process information.  This definition will allow us to distinguish the ways we respond—i.e. emotionally, physically, and intellectually—to the various stimuli around us.


Second, let’s say that “to think critically” is to carefully and logically process information with an awareness of the methods by which we process this information and a commitment to use the best methods available.  This definition will allow us to distinguish between thinking that is fueled by bias, desire, and unexamined assumptions and thinking that is fueled by the need to form well-informed judgments about the worth and meaning of information.


Third, let’s say that “to think carefully and logically with an awareness of our methods and with a commitment to quality” is to examine the “glue” that ties ideas together and to expose the rational (or irrational) relationships between ideas.


Fourth, let’s say that all of the following are barriers to thinking critically:


egocentrism (the assumption that something is true simply because it’s what you personally believe)


sociocentrism (the assumption that something is true because it's what your group believes)


wishful thinking (the assumption that something is true because it “feels” good to believe it)


absolutism (the assumption that there is one unchanging, inflexible set of criteria for judging actions, ideas, and beliefs)


relativism (the assumption that there are no standards of judgments beyond individual preference)


Fifth, let’s say that critical thinking embraces dynamism and change by accommodating new knowledge and adjusting conclusions accordingly.


And sixth, let’s say that a “critical thinker” is someone who embraces the previous five principles in pursuit of deepening and clarifying his/her understanding of the world.


Okay, all of this probably sounds unobjectionable enough in the abstract.  The question for us is this: how do we know when these principles of critical thinking have been violated?  I know you know some answers to this question.  You know, for example, that a statement based on superstition and gut instinct is not worth as much as a statement based on verifiable knowledge.  I know you also know that a conclusion based on little evidence is inferior to a conclusion based on ample evidence. 


Post Hoc (Doubtful Cause)

As indicated above, logic and reasoning are like “glue” that binds ideas together.  Logic and reasoning are what allow someone to say that X caused Y or Y is a result of X or X is similar to/different from Y or the choice is between X and Y or you should believe X because Y is true, etc.  We can take two statements and tie them together with logic and reasoning to create a relationship between those statements.  The result is an idea. 

Consider these two statements:


Jerry ate a ham sandwich from the buffet platter at 1:00 pm.

Jerry began vomiting at 1:30 pm.


Those are simply statements of fact.  But we can use logic and reasoning to create a relationship between those facts:


1. The ham sandwich Jerry ate from the buffet platter at 1:00 pm caused him to throw up 30 minutes later.




2. Jerry threw up because of the ham sandwich he ate.


If you found yourself questioning both of those statements, you’re a critical thinker.  You recognize that for statements #1 and #2, it’s possible that the ham sandwich made Jerry sick, but it’s also possible that something else caused Jerry to throw up.  Perhaps he drank 13 cans of beer before eating the sandwich.  Perhaps he had the stomach flu.  Perhaps he ate something else that was tainted.  Perhaps he has a food allergy that caused him to react to the ham sandwich.  In other words, you recognize that there is reason to be skeptical of the logic behind the assertion (i.e. to be skeptical of the proposed “glue” tying the cause to the effect).  You see that there may be reason to doubt that there’s something wrong with the ham sandwich, and you’d want more information before accepting the claim.  The intellectually sophisticated way to say this (or maybe just the shortcut) is to say that there is a “post hoc” or “doubtful cause” logical fallacy in statements #1 and #2.  This doesn’t necessarily mean the conclusions are patently false; it just means that the conclusions are, so far, supported by sloppy or erroneous reasoning.  To eliminate the logical fallacy, we’d need more evidence to tie the statements together in the relationships being described here. 


Post Hoc (or doubtful cause) fallacies often occur when people confuse correlation and causation—in other words, when people believe that two events that happen in the same time period are involved in a cause/effect relationship.  Sometimes, these fallacies are obvious and easy to spot.  The claim that the US Supreme Court’s decision in 1962 to prohibit prayer in public schools led to increases in divorce, STDs, violent crime, and a host of other social ills is a classic post hoc fallacy.  Yes, prayer was banned from public schools at around the same time the US saw an uptick in the incidence of these social ills, but without more to tie these phenomena together, they are nothing more than coincidences.  At around the same time prayer was banned from public schools, the US was becoming more involved in an unpopular foreign war and sending home veterans with post-traumatic stress disorder (maybe this is what caused increases in violent crime?); women were gaining more rights legally (perhaps this accounts for increases in divorce as women no longer felt as hopelessly trapped in unhappy/abusive marriages?); and the birth control pill was becoming more widely available at the same time as the “free love” movement was developing (maybe these account for increases in sexual activity and STDs?).  In fact, we could create an enormously long list of other things that were happening at around the same time that the US saw increases in certain behaviors people don’t like.  Just because they are happening at the same time doesn’t mean one is causing any of the others.  Of course, there may, in fact, be a particular cause/effect relationship here.  Maybe eliminating prayer from public school did have the effect some people claim.  But without a REASON to believe this is true, there is no reason to believe it is.  Without a REASON, we can just as easily believe that it was some other cause.  
If I told you that these social ills are the result of the fact that women were granted the legal right to have credit cards in their own names without the permission of their husbands (yes, this did not, in fact, happen until the 1970s), you’d be “doubtful”—and wisely so.  You’d say something like, “Hey, wait a minute.  How do you know credit cards caused STD increases?”  And if I said, “Well, because they happened at around the same time,” you’d just snicker, knowing that correlation doesn’t mean causation.  We can easily see lots of these fallacies in the world around us:
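If you like to see the numbers, the trap is easy to demonstrate.  Here is a minimal Python sketch (all figures invented for illustration) showing that two series that merely trend upward over the same decade correlate almost perfectly, even though neither causes the other:

```python
# A sketch (with made-up numbers) of why "they happened at around the
# same time" proves nothing: any two series that each simply rise over
# time will correlate strongly even if neither causes the other.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical yearly figures, 1962-1971: both rise for unrelated
# reasons, yet the correlation is nearly perfect.
divorces = [10, 11, 13, 14, 16, 18, 19, 21, 24, 26]   # invented data
tv_sets  = [50, 55, 61, 66, 73, 79, 85, 92, 98, 105]  # invented data

r = pearson(divorces, tv_sets)
print(f"correlation: {r:.3f}")  # close to 1.0 -- but TVs don't cause divorce
```

The correlation here is nearly 1.0, and it would be just as high for almost anything that grew over the decade—which is exactly why correlation alone is not a REASON.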


Since Obama came into office, unemployment has increased; Obama’s policies have caused more people to be unemployed.


A new study shows that 87% of overweight people watch between 4 and 6 hours of television per day, while 90% of normal-weight people watch less than 2 hours of television per day.  It looks like excessive TV watching leads to weight problems.


Children raised in families that eat dinner together perform 77% better in school than kids raised by families that do not sit down together for a nightly meal.  Not giving your kids the chance to eat together at night really hurts them academically. 


Again, perhaps these are true statements.  But as critical thinkers we’ll want to know WHY we should believe they are true—i.e. why we should believe one causes the other.  We’d need to see a direct cause/effect connection between Obama’s policies and unemployment.  Otherwise, what do we say to someone who claims the increases in unemployment are simply a continuation of the increases that resulted from Bush’s policies?  Or to someone who claims they are the result of a deepening recession caused not by policies but by an out-of-control Wall Street?  Or to someone who says they are the natural by-product of a retraction in the overgrown economy?  In other words, without logic to tie the two together, why would we believe one cause more than any other (unless it’s out of egocentrism or wishful thinking [see above] and it’s just what we want to believe because it serves some personal ideology that we value more than the truth)?  And is it television causing weight gain, or are there other lifestyle factors common to big TV watchers (e.g. poor diet, lack of physical activity) that cause obesity?  And is it the family meal that leads to higher academic performance, or is it other factors common to families that eat together (e.g. stability, unity of schedules, a general valuing of family members) that get the grades higher?  Think of it this way: if my kid’s grades suck, will they come up just because we sit down for burgers together every night? 



False Dilemma (Black/White Thinking or False Choice or Either/Or)


Imagine you’re arguing with someone about the economy, and he says the following to you:


Abuse of the welfare system is rampant nowadays. Our only alternative is to abolish the system altogether.


If I respond by arguing for either A) the need to accept a corrupt system or B) the need to get rid of the system entirely, I’ve taken the bait.  The person making the above statement is illogically (and dishonestly) narrowing the range of options to two extremes (all or nothing) and ignoring a wide range of possibility between those two extremes.  They are essentially saying, “It is either this or that but not anything else.  Here’s what you have to choose between, option A or option B… there are no other options.”  The speaker is making it sound like the dilemma is between these two options only, but because we know the situation is much more complicated than that, we can easily see that the black/white choice they’ve put to us is faulty.  But if I respond by saying simply, “Hold on.  That’s a false dilemma,” I shouldn’t expect them to immediately back away from that statement apologetically and fess up to the error.  First of all, they may not know what a false dilemma is.  Second, they may know what it is, but by another of its names (e.g. black/white thinking, false choice, either/or thinking, or false problem).  Third, even if they do know what a false dilemma is or even if I can educate them about what it is, they may not be able to see it in their own statement without my help. 


So the question is this: What can I do to help them see the error in their thinking?


And that is an important question—a very important question.


So here’s what I should do: I should begin by explaining briefly what a false dilemma is: a false dilemma (sometimes called black/white thinking or either/or thinking or false choice) ignores a range of reasonable possibilities for the sake of proposing just a few diametrically opposed options.  It’s a way of reducing or even eliminating the “gray areas” of an issue and oversimplifying it.  When someone says something is either totally good or totally evil, totally innocent or totally guilty, or it’s my way or the highway and you have no other option, they’re probably guilty of this fallacy.


That’s a good start down the road of getting my illogical friend to see his own illogic, but it’s entirely possible (indeed likely) that the speaker won’t see this particular thinking error in his statement about the welfare system.  Some people have embraced a false dilemma for so long that it seems rock solid true (a case of egocentrism and sometimes absolutism [see above]).  America: love it or leave it.  You’re either with us or against us.  Either some Americans die from lack of health insurance or we accept a fascist/socialist/Nazi/Hitler-like public health option.  So we’re going to have to pull the logical fallacy out of their statement and expose it by saying something like this:


You say I have to either accept a broken and corrupt welfare system or have none at all.  But there are a number of other reasonable options.  We could, for example, reform the current welfare system to reduce or eliminate that corruption.   


Now, what I’ve done is taken the logical fallacy definition and applied it to the statement to flesh out the error in the reasoning.  The person who said this to me can now respond in only one of three ways:


a. Provide evidence that no other options exist.

b. Revise the statement to accommodate other options and, thereby, make it more logical.

c. Stick to his guns and live happily in denial, believing that the issue is much simpler than it actually is.


If he chooses option a or b, I’m willing to keep talking.  If he chooses option c, I’m not.  Why talk to someone who’s not a critical thinker and has more interest in forwarding an agenda than seeking the truth?


The point here is that HOW we arrive at conclusions matters.  We don’t want to be right for the wrong reasons, and we don’t want to embrace claims arrived at through broken logic.  A broken clock is right twice a day, but that doesn’t make it a reliable way to tell time (even when it coincidentally shows the correct time).  And we don’t want to win arguments by deception.  It may be easier to get you to do the right thing by lying to you, but in most circumstances, I should avoid dishonoring you and myself by deceiving you.  HOW you get to the conclusion matters—it may matter even more than the particular conclusion itself.  And no matter how much you want to believe in the truth of the conclusion, you should do so only if you have reason to do so. 


So the goal is to expose the logic or illogic so that everyone can see it in the statement offered.  That’s the first step to searching for good ground to build our arguments on. 


Here it’s worth emphasizing an important distinction: There’s a difference between committing a logical fallacy and committing a factual error.  A factual error occurs when someone is just flat out wrong.  When Rush Limbaugh reported in October 2009 on his radio show that he had discovered Barack Obama’s master’s thesis from Columbia University and that in the thesis, Obama expressed disdain for the US Constitution, he was simply factually wrong.  The thesis wasn’t written by Obama.  It was a piece of satire produced by someone else.  When Limbaugh was compelled to acknowledge the thesis was a fake and hadn’t been written by Obama, he responded with illogic: “….we know he (Obama) thinks it. Good comedy, to be comedy, must contain an element of truth, and we know how he feels about distribution of wealth….So, I can say, I don't care if these quotes are made up.”  In a weird kind of logical gymnastics, Limbaugh offers the conclusion that the falseness of the evidence points to the truth of the conclusion and that whether something is real or “made up” doesn’t matter; it’s what he wants to believe that matters (an example of wishful thinking [see above]).  It’s the kind of illogic famously parodied in George Orwell’s 1984, where “freedom is slavery.”  In Limbaugh’s claim, falsity is truth.  We can’t really “disagree” or “agree” with the claim because it’s so illogical.  The only thing we can really do is accept it as it is in all of its illogical glory or reject it until logic is used.  That inability to engage the claim and disagree with it is a sign of a logic problem. 


So, again, how you arrive at your conclusion matters—tremendously. 



Hasty Generalization


Now imagine someone claims this:


The CNN meteorologist was wrong in predicting the amount of rain for May 2012.  Obviously, he’s not a reliable meteorologist.


Here, it’s valuable to remember the principles of inductive reasoning.  In the above claim, the conclusion being drawn is a big one—we’re not judging the meteorologist’s performance for the month of May 2012; we’re judging his performance over the course of his career.  But basing such a big conclusion on evidence from just one month is hasty, and it’s what gives this fallacy its name: hasty generalization.  We need more months, and because weather changes from season to season, we need a more representative sample.  So perhaps a sample that includes a fall, winter, spring, and summer month for each of the last three years would generate a more reliable conclusion.  And, again, perhaps it turns out the meteorologist is unreliable, but it’s not logical to draw that conclusion based only on his predictions for one month. 


The question for us is how do we get the speaker of this illogical statement to see his own illogic.  Including the general explanation above would be a good start, but it probably won’t be enough.  It may help to show how using such a small sample could lead to false conclusions.  Let’s say, for example, that the meteorologist was spot-on in predicting rain for every single month from January 2005 to June 2012 with the exception of May 2012.  If so, it’s unfair to judge him as unreliable for having one wrong prediction out of ninety.  We need to either revise the conclusion or gather a bigger sample. 


As you can probably see, stereotypes about race and sex are often rooted in hasty generalizations.  Someone sees a few karate movies and suddenly that person is asking anyone with Asian physical features if he knows karate.  It’s widely believed women are more emotional than men.  But based on what?  Our experience with a minuscule sampling of American women?  Are the women we’ve been in contact with who’ve led us to this conclusion sufficient in number?  Representative of women around the world in different economic, social, and cultural groups?  And what about the men we’re comparing them to?  Do we really have a big enough, verifiable sample to draw a conclusion about an entire sex?  Or do we do so because our society allows for this conclusion since it is in keeping with solidly established cultural assumptions about gender (sociocentrism [see above])?  Because it’s what we’ve always believed (egocentrism [see above])?  Because it’s what we want to believe is true (wishful thinking [see above])?


Of course, the difficulty with hasty generalizations is knowing when one has actually occurred.  After all, we’re going to have to draw conclusions about Americans and Iraqis and Costco shoppers and college students, etc., without gathering all Americans, Iraqis, Costco shoppers, and college students in our samples.  This means you have to make decisions about what you are willing to believe.  If I base a conclusion about Costco shoppers on a sample of only 10 Costco shoppers, is that enough?  What if it’s 100?  1,000?  What if it’s 10,000 shoppers but they are all from the greater Los Angeles area?  Is that representative enough for you to accept a conclusion about Costco shoppers nationwide?  And what if I can’t actually show you my data or my sample?  Are you willing to take my word for it when I simply tell you what questions I asked or what I observed without showing you any evidence that this is what I asked and what they said?  You are going to have to choose what to believe throughout your life.  How will you choose?  Will you believe anything?  Nothing?  Only the things you want to believe?  Only the things you’re told to believe?  Or the things you have reason to believe?  
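The effect of sample size is something you can see for yourself.  Below is a rough Python simulation (the Costco scenario and the 30% figure are invented for illustration): if some fixed share of all shoppers behaves a certain way, tiny surveys will report wildly different rates, while large surveys settle near the true value.

```python
# A rough simulation (invented scenario) of why small samples invite
# hasty generalizations: if 30% of all Costco shoppers buy in bulk,
# surveys of 10 shoppers scatter widely, while surveys of 10,000
# cluster tightly around the true rate.
import random

TRUE_RATE = 0.30  # assumed population rate, for illustration only

def sample_estimate(n):
    """Survey n random shoppers; return the observed bulk-buying rate."""
    hits = sum(1 for _ in range(n) if random.random() < TRUE_RATE)
    return hits / n

random.seed(1)
for n in (10, 100, 10_000):
    estimates = [round(sample_estimate(n), 3) for _ in range(5)]
    print(f"n = {n:>6}: five surveys -> {estimates}")
# The n=10 surveys swing far above and below 0.30; the n=10,000 surveys
# barely move.  A conclusion drawn from 10 shoppers is hasty.
```

Run it a few times with different seeds: the small surveys can easily suggest rates of 0.0 or 0.6, which is exactly the kind of evidence a hasty generalization is built on.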


Final Thoughts


Let’s revisit some of the definitions we began this discussion with.  We said that a critical thinker is one who seeks to deepen his/her knowledge and understanding of the world.  That implies that a critical thinker has to be able to accept some things as true or merited while rejecting other things as untrue or unmerited.  The true part is easy: we just won’t accept falsehoods, and the only thing we have to possess to avoid accepting falsehoods is good knowledge.  The “merited” part is harder.  This means that we accept a claim when there is reason to do so, and this implies that there are standards for judging what is worth our acceptance and what isn’t.  The purpose of this discussion is to assert one of those standards: we should accept logical claims but not illogical ones.  Simple enough, right?  However, someone new to all this logic and logical fallacy stuff may be tempted to simply dismiss everything.  After all, no argument is perfect, and the easiest way to avoid accepting something illogical is to avoid accepting anything at all.  But this won’t work.  We have to know what to believe and disbelieve in order to make decisions.  We have to vote for representatives to our local, state, and national governments; we have to buy food; we have to make choices at work.  And good decisions depend on good knowledge and logic. 


So the bottom line:


In order to deepen our knowledge and understanding of the world, we have to accept some claims and reject others.


We should accept logical claims and reject illogical ones.


We have to be able to expose the illogic of illogical claims so that people understand why we are not accepting their claims.