Motivated Reasoning 

When we identify too strongly with a deeply held belief, idea, or outcome, a plethora of cognitive biases can rear their ugly heads. Take confirmation bias, for example. This is our inclination to eagerly accept any information that confirms our opinion, and undervalue anything that contradicts it. It’s remarkably easy to spot in other people (especially those you don’t agree with politically), but extremely hard to spot in ourselves because the biasing happens unconsciously. But it’s always there. 

Criminal cases where jurors unconsciously ignore exonerating evidence and send an innocent person to jail because of a bad experience with someone of the defendant’s demographic. The growing inability to hear alternative arguments in good faith from other parts of the political spectrum. Conspiracy theorists swallowing any unconventional belief they can get their hands on.

We all have some deeply held belief that immediately puts us on the defensive. Defensiveness doesn’t mean that belief is actually incorrect. But it does mean we’re vulnerable to bad reasoning around it. And if you can learn to identify the emotional warning signs in yourself, you stand a better chance of evaluating the other side’s evidence or arguments more objectively.

Liv Boeree writing in Vox    

Orange Buttons are the Best

An appeal to authority is a false claim that something must be true because an authority on the subject believes it to be true. It is possible for an expert to be wrong, so we need to understand their reasoning or research before we appeal to their findings. In a design meeting you might hear something like this:

“Amazon is a successful website. Amazon has orange buttons. So orange buttons are the best.”

Feel free to switch out ‘Amazon’ and ‘orange buttons’ for anything you want; you get an equally weak argument. We could argue back that Amazon is surviving on past success and that larger companies often find it hard to innovate, so they shouldn’t be used as a design influence. We could point out that Jeff Bezos has a reputation for micro-managing and ignoring the evidence provided by the usability experts he has hired. As a result, we could point out that Amazon is possibly successful in spite of its design, not because of it. But the words ‘often’, ‘reputation’ and ‘possibly’ make all these arguments equally weak and full of fallacies.

When we counter any logical fallacy, we want to do it as cleanly as possible. In the above example, we only need to point out that many successful websites don’t have orange buttons and many unsuccessful sites do have orange buttons. Then we can move away from the matter entirely unless there is some research or reason available to explain the authority’s decision.

Rob Sutcliffe writing in Prototypr

The Madman's Narrative

Consider that two people can hold incompatible beliefs based on the exact same data. Does this mean that there are possible families of explanations and that each of these can be equally perfect and sound? Certainly not. One may have a million ways to explain things, but the true explanation is unique, whether or not it is within our reach.

In a famous argument, the logician W. V. Quine showed that there exist families of logically consistent interpretations and theories that can match a given series of facts. Such insight should warn us that mere absence of nonsense may not be sufficient to make something true.

Nassim Taleb, The Black Swan

Our Favorite Conclusions

It would be unfair for teachers to give the students they like easier exams than those they dislike, for federal regulators to require that foreign products pass stricter safety tests than domestic products, or for judges to insist that the defense attorney make better arguments than the prosecutor.

And yet, this is just the sort of uneven treatment most of us give to facts that confirm and disconfirm our favored conclusions.

For example, volunteers in one study were told that they had performed very well or very poorly on a social-sensitivity test and were then asked to assess two scientific reports—one that suggested the test was valid and one that suggested it was not. Volunteers who had performed well on the test believed that the studies in the validating report used sounder scientific methods than did the studies in the invalidating report, but volunteers who performed poorly on the test believed precisely the opposite.

To ensure that our views are credible, our brain accepts what our eye sees. To ensure that our views are positive, our eye looks for what our brain wants.

Daniel Gilbert, Stumbling on Happiness

Finding our Preferred Facts

Most of us have ways of making other people confirm our favored conclusions without ever engaging them in conversation. Consider this: To be a great driver, lover, or chef, we don’t need to be able to parallel park while blindfolded, make ten thousand maidens swoon with a single pucker, or create a pâte feuilletée so intoxicating that the entire population of France instantly abandons its national cuisine and swears allegiance to our kitchen. Rather, we simply need to park, kiss, and bake better than most other folks do. How do we know how well most other folks do? Why, we look around, of course—but in order to make sure that we see what we want to see, we look around selectively.

For example, volunteers in one study took a test that ostensibly measured their social sensitivity and were then told that they had flubbed the majority of the questions. When these volunteers were then given an opportunity to look over the test results of other people who had performed better or worse than they had, they ignored the test of the people who had done better and instead spent their time looking over the tests of the people who had done worse.

The bottom line is this: The brain and the eye may have a contractual relationship in which the brain has agreed to believe what the eye sees, but in return the eye has agreed to look for what the brain wants.

Daniel Gilbert, Stumbling on Happiness

What Exactly Is "Critical Thinking"?

Critical thinking entails at least ten reasoning abilities and habits of thought:

1. Consciously raising the questions “What do we know. . . ? How do we know . . . ? Why do we accept or believe. . . ? What is the evidence for. . . ?” when studying some body of material or approaching a problem.

2. Being clearly and explicitly aware of gaps in available information. Recognizing when a conclusion is reached or a decision made in absence of complete information and being able to tolerate the ambiguity and uncertainty. Recognizing when one is taking something on faith without having examined the “How do we know. . . ? Why do we believe. . . ?” questions.

3. Discriminating between observation and inference, between established fact and subsequent conjecture.

4. Recognizing that words are symbols for ideas and not the ideas themselves. Recognizing the necessity of using only words of prior definition, rooted in shared experience, in forming a new definition and in avoiding being misled by technical jargon.

5. Probing for assumptions (particularly the implicit, unarticulated assumptions) behind a line of reasoning.

6. Drawing inferences from data, observations, or other evidence and recognizing when firm inferences cannot be drawn. This subsumes a number of processes such as elementary syllogistic reasoning (e.g., dealing with basic propositional "if. . .then" statements), correlational reasoning, and recognizing when relevant variables have or have not been controlled.

7. Performing hypothetico-deductive reasoning; that is, given a particular situation, applying relevant knowledge of principles and constraints and visualizing, in the abstract, the plausible outcomes that might result from various changes one can imagine to be imposed on the system.

8. Discriminating between inductive and deductive reasoning; that is, being aware when an argument is being made from the particular to the general or from the general to the particular.

9. Testing one's own line of reasoning and conclusions for internal consistency and thus developing intellectual self-reliance.

10. Developing self-consciousness concerning one's own thinking and reasoning processes.

Physicist Arnold Arons, Teaching Introductory Physics 

Take the Risk Test

The Cognitive Reflection Test (CRT) is a set of three simple questions designed to predict whether you will be good at things like managing money. Each question has an intuitive – and wrong – response. Most people need a moment to get the right answer.

Here are the three questions:

1. A bat and a ball together cost $1.10. The bat costs $1.00 more than the ball. How much does the ball cost?

2. If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?

3. In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half the lake?

Shane Frederick, assistant professor at Massachusetts Institute of Technology's Sloan School of Management, devised the test to assess the specific cognitive ability that relates to decision‐making. It has an amazing correlation with people’s ability to evaluate risky propositions and to sort out the time value of money. (A dollar today is worth more than a dollar in the future because today’s dollar earns interest.)

For example, those who incorrectly answered the first question thought that 92% of people would answer it correctly. Those who answered it correctly thought that 62% would get it right. The ones who answered instinctively – and therefore incorrectly – have an over‐inflated sense of confidence, misreading the difficulty of challenges. We have a natural tendency toward overconfidence and bias. Researchers say people consistently overrate their knowledge and skill.

Consider these two alternatives: Would you rather receive $3,400 this month or $3,800 next month?

The second choice is better. It is the same as getting 12% interest in only a single month. Of the people who got all three questions right on the CRT, 60% preferred to wait a month. Of the people who got all three questions wrong on the CRT, only 35% preferred to wait.
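The one-month return in that choice can be checked with a quick sketch, using only the two figures quoted above:

```python
# $3,400 this month vs. $3,800 next month: how big is the implied return?
now, later = 3400, 3800
monthly_return = (later - now) / now
print(f"{monthly_return:.1%}")  # → 11.8%, roughly the 12% quoted
```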

In other words, people with higher scores indicated a greater tolerance for risk when the odds were in their favor.

This is shown by another option offered to participants. People were asked which they would prefer: $500 for sure, or a gamble in which there was a 15% chance of receiving one million dollars and an 85% chance of receiving nothing.

Most of the people who scored zero on the CRT took the money, while most of those who scored a perfect three on the test took the gamble. The latter group instinctively understood the concept of expected value, which is the sum of the possible values, each multiplied by its probability (15% of $1 million plus 85% of zero equals $150,000). The gamble is worth far more than the sure $500, but many people don’t naturally see it that way. They are basing their decisions on emotions rather than logic.
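The expected-value calculation above is just a probability-weighted sum, which a few lines make explicit:

```python
# Expected value of the gamble: 15% chance of $1,000,000, 85% chance of $0.
outcomes = [(0.15, 1_000_000), (0.85, 0)]
expected_value = sum(p * v for p, v in outcomes)
print(expected_value)  # → 150000.0, far more than the sure $500
```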

This can be seen in what’s known as prospect theory, developed by two psychologists, Daniel Kahneman and Amos Tversky. They found that people take greater risks to avoid losses than they do to earn profits because the pain of losing is greater than the joy of winning. That’s why people hang onto stocks and other investments (including people) when they should have let go of them long ago.

Oh, and the answers to the CRT?

1. Five cents, not ten cents.

2. 5 minutes, not 100 minutes.

3. 47 days, not 24 days.
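Each answer can be verified with a line or two of arithmetic, using only the numbers given in the questions:

```python
# 1. Bat and ball: ball + (ball + 1.00) = 1.10, so the ball is 5 cents.
ball = round((1.10 - 1.00) / 2, 2)

# 2. Widgets: one machine makes one widget in 5 minutes, so 100 machines
#    make 100 widgets in the same 5 minutes.
minutes = 5

# 3. Lily pads: the patch doubles daily, so it covers half the lake
#    exactly one day before it covers all of it.
half_lake_day = 48 - 1

print(ball, minutes, half_lake_day)  # → 0.05 5 47
```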
