The Base Rate Fallacy can get you in trouble

The Base Rate Fallacy comes into play when someone reaches a conclusion without considering all the relevant information. There is a tendency to overestimate the value of new information taken out of context. Consider also that an accurate test is not necessarily a very predictive test, and that some facts are provably true yet can feel false when phrased a certain way. These factors can lead someone to hold misconceptions about what medical tests and other data actually mean.
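
A quick worked example makes the point concrete. The numbers below are purely illustrative (a 1% base rate, a test with 99% sensitivity and 95% specificity), but they show how an accurate test can still be weakly predictive once the base rate is taken into account:

```python
# Illustrative numbers only: a test that is 99% sensitive and 95% specific
# for a condition affecting 1% of the population.
prevalence = 0.01     # base rate: 1 in 100 people actually have the condition
sensitivity = 0.99    # P(test positive | condition present)
specificity = 0.95    # P(test negative | condition absent)

# Bayes' theorem: P(condition | positive test)
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_condition_given_positive = sensitivity * prevalence / p_positive

print(f"Probability of having the condition after a positive test: "
      f"{p_condition_given_positive:.1%}")
# Roughly 17% -- the test is accurate, yet most positives are false positives,
# because the low base rate dominates the result.
```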

The Green Lumber Fallacy

A fellow made a fortune in green lumber without knowing what appear to be essential details about the product he traded—he wasn’t aware that green lumber stood for freshly cut wood, not lumber that was painted green.

By contrast, the person who related the story went bankrupt while knowing every intimate detail about the green lumber, including the physical, economic, and other aspects of the commodity.

The fallacy is that what one may need to know in the real world does not necessarily match what one can perceive through intellect: it doesn’t mean that details are not relevant, only that those we tend to believe are important constitute a distraction from attributes more central to the price mechanism.

Nassim Nicholas Taleb

 

Our Favorite Conclusions

It would be unfair for teachers to give the students they like easier exams than those they dislike, for federal regulators to require that foreign products pass stricter safety tests than domestic products, or for judges to insist that the defense attorney make better arguments than the prosecutor.

And yet, this is just the sort of uneven treatment most of us give to facts that confirm and disconfirm our favored conclusions.

For example, volunteers in one study were told that they had performed very well or very poorly on a social-sensitivity test and were then asked to assess two scientific reports—one that suggested the test was valid and one that suggested it was not. Volunteers who had performed well on the test believed that the studies in the validating report used sounder scientific methods than did the studies in the invalidating report, but volunteers who performed poorly on the test believed precisely the opposite.

To ensure that our views are credible, our brain accepts what our eye sees. To ensure that our views are positive, our eye looks for what our brain wants.

Daniel Gilbert, Stumbling on Happiness

False equivalence

False equivalency means that you think (or are told) two things should have equal weight in your decision-making. If one opinion has solid data supporting it, but the other opinion is conjecture, they are not equivalent in quality. 

False equivalence leads people to believe two separate things are equally bad, or equally good. A look into how damaging this thought process is can be found in Isaac Asimov's essay, "The Relativity of Wrong." Asimov wrote, "When people thought the Earth was flat, they were wrong. When people thought the Earth was spherical, they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together. The basic trouble, you see, is that people think that 'right' and 'wrong' are absolute; that everything that isn't perfectly and completely right is totally and equally wrong."

Stephanie Sarkis writing in Forbes

The Backfire effect 

Once something is added to your collection of beliefs, you protect it from harm. You do it instinctively and unconsciously when confronted with attitude-inconsistent information. Just as confirmation bias shields you when you actively seek information, the backfire effect defends you when the information seeks you, when it blindsides you. Coming or going, you stick to your beliefs instead of questioning them. When someone tries to correct you, tries to dilute your misconceptions, it backfires and strengthens them instead. Over time, the backfire effect helps make you less skeptical of those things which allow you to continue seeing your beliefs and attitudes as true and proper.

David McRaney  

Motivated reasoning

Motivated reasoning is thinking through a topic with the aim, conscious or unconscious, of reaching a particular kind of conclusion. In a football game, we see the fouls committed by the other team but overlook the sins of our own side. We are more likely to notice what we want to notice. Experts are not immune to motivated reasoning. Under some circumstances their expertise can even become a disadvantage. 

People with deeper expertise are better equipped to spot deception, but if they fall into the trap of motivated reasoning, they are able to muster more reasons to believe whatever they really wish to believe.

Tim Harford, How to Make the World Add Up

False Causality

We are always in search of patterns. This tendency means that sometimes we find patterns where none really exist. Our brains are so trained to seek them out that we will make sense of chaos to the extent that we can.

Because our training wires us to seek out patterns, it’s crucial to remember the simple maxim that correlation does not imply causation. Just because two variables move in tandem doesn’t necessarily mean that one causes the other.

This principle has been hilariously demonstrated by numerous examples. For instance, looking at fire department data, you might notice that the more firemen are dispatched to a fire, the more damage is ultimately done to the property. You might therefore infer that more firemen cause more damage. In another famous example, an academic investigating the cause of crime in New York City in the 1980s found a strong correlation between the number of serious crimes committed and the amount of ice cream sold by street vendors. But should we conclude that eating ice cream drives people to crime? Since this makes little sense, we should suspect an unobserved variable causing both. Crime rates are highest during the summer, which is also when the most ice cream is sold. Ice cream sales don’t cause crime, nor does crime increase ice cream sales. In both of these instances, looking at the data too superficially leads to incorrect assumptions.

Rahul Agarwal writing in Built In
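
A minimal simulation of the confounder pattern Agarwal describes, with invented numbers: a hidden variable (temperature) drives both ice cream sales and crime reports, so the two correlate strongly even though neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden confounder: daily temperature over a year (illustrative values).
temperature = 15 + 10 * np.sin(np.linspace(0, 2 * np.pi, 365)) + rng.normal(0, 2, 365)

# Both series depend on temperature, not on each other.
ice_cream_sales = 50 + 8 * temperature + rng.normal(0, 20, 365)
crime_reports = 20 + 3 * temperature + rng.normal(0, 10, 365)

corr = np.corrcoef(ice_cream_sales, crime_reports)[0, 1]
print(f"Correlation between ice cream sales and crime: {corr:.2f}")
# Prints a strong positive correlation (well above 0.8), even though neither
# variable was generated from the other -- only from temperature.
```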

 

Bullet-riddled Fighter Planes

During World War II, researchers from the Center for Naval Analyses, a non-profit research group, were tasked with a problem: they needed to reinforce the military’s fighter planes at their weakest spots. To accomplish this, they turned to data. They examined every plane that came back from a combat mission and noted where bullets had hit the aircraft. Based on that information, they recommended that the planes be reinforced at those precise spots.

Do you see any problems with this approach?

The problem, of course, was that they looked only at the planes that returned, not at the planes that didn’t. Data from the planes that had been shot down would almost certainly have been far more useful in determining where fatal damage was likely to occur, since those were the planes that suffered catastrophic damage.

The research team suffered from survivorship bias: they just looked at the data that was available to them without analyzing the larger situation. This is a form of selection bias in which we implicitly filter data based on some arbitrary criteria and then try to make sense out of it without realizing or acknowledging that we’re working with incomplete data.

Rahul Agarwal writing in Built In
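
To see the same selection effect in miniature, here is a small simulation with entirely hypothetical hit and survival rates: engine hits are assumed to be far more often fatal, so looking only at the returning planes makes them appear rare.

```python
import numpy as np

rng = np.random.default_rng(1)
n_planes = 10_000

# Each plane takes one hit, either to the engine or the fuselage (assumed 50/50).
hit_engine = rng.random(n_planes) < 0.5

# Assumed survival rates: engine hits are far more often fatal.
survives = np.where(hit_engine,
                    rng.random(n_planes) < 0.3,   # 30% survive an engine hit
                    rng.random(n_planes) < 0.9)   # 90% survive a fuselage hit

returned = survives  # we only ever get to inspect planes that made it back

engine_share_all = hit_engine.mean()
engine_share_returned = hit_engine[returned].mean()

print(f"Engine hits among all planes:       {engine_share_all:.0%}")       # ~50%
print(f"Engine hits among returning planes: {engine_share_returned:.0%}")  # ~25%
# The returning sample makes engine hits look rare precisely because they are
# the hits most likely to bring a plane down -- survivorship bias.
```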

Availability Bias

Have you ever said something like, “I know that [insert a generic statement here] because [insert one single example].” For example, someone might say, “You can’t get fat from drinking beer, because Bob drinks a lot of it, and he’s thin.” If you have, then you’ve suffered from availability bias. You are trying to make sense of the world with limited data.

People naturally tend to base decisions on information that is already available to them, or on things they hear about often, without looking at alternatives that might be useful. As a result, we limit ourselves to a very specific subset of information.

This happens often in the data science world. Data scientists tend to get and work on data that’s easier to obtain rather than looking for data that is harder to gather but might be more useful. We make do with models that we understand and that are available to us in a neat package rather than something more suitable for the problem at hand but much more difficult to come by.

A way to overcome availability bias in data science is to broaden our horizons. Commit to lifelong learning. Read. A lot. About everything. Then read some more. Meet new people. Discuss your work with other data scientists at work or in online forums. Be more open to suggestions about changes you may have to make in your approach. By opening yourself up to new information and ideas, you can make sure that you’re less likely to work with incomplete information.

Rahul Agarwal writing in Built In

 

Loss Aversion

People hate losses. Roughly speaking, losing something makes you twice as miserable as gaining the same thing makes you happy. In more technical language, people are “loss averse.” How do we know this?

Consider a simple experiment. Half the students in a class are given coffee mugs with the insignia of their home university embossed on them. The students who did not get a mug are asked to examine their neighbors’ mugs. Then mug owners are invited to sell their mugs and nonowners are invited to buy them. They do so by answering the question “At each of the following prices, indicate whether you would be willing to (give up your mug/buy a mug).”

The results show that those with mugs demand roughly twice as much to give up their mugs as others are willing to pay to get one. Thousands of mugs have been used in dozens of replications of this experiment, but the results are nearly always the same. Once I have a mug, I don’t want to give it up. But if I don’t have one, I don’t feel an urgent need to buy one.  

What this means is that people do not assign specific values to objects. When they have to give something up, they are hurt more than they are pleased if they acquire the very same thing.

Richard Thaler & Cass Sunstein, Nudge

Extraordinary claims (require extraordinary evidence)

For some people, the less likely an explanation, the more likely they are to believe it. Take flat-Earth believers. Their claim rests on the idea that all the pilots, astronomers, geologists, physicists, and GPS engineers in the world are intentionally coordinating to mislead the public about the shape of the planet. From a prior odds perspective, the likelihood of a plot so enormous and intricate coming together out of all other conceivable possibilities is vanishingly small. But bizarrely, any demonstration of counterevidence, no matter how strong, just seems to cement their worldview further.

Liv Boeree writing in Vox   
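
The “prior odds perspective” can be made concrete with Bayes’ rule in odds form: posterior odds equal prior odds times the likelihood ratio of the evidence. The numbers below are illustrative assumptions, not measurements.

```python
# Bayes' rule in odds form: posterior_odds = prior_odds * likelihood_ratio.
# The numbers below are illustrative assumptions.
prior_odds = 1 / 1_000_000   # a vast, perfectly coordinated conspiracy: very unlikely a priori

# Suppose a piece of evidence is 100x more likely if the claim were true
# than if it were false -- already quite strong evidence.
likelihood_ratio = 100

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"Posterior probability: {posterior_prob:.4%}")   # still only about 0.01%
# Even strong evidence barely moves a claim whose prior odds are tiny --
# hence "extraordinary claims require extraordinary evidence."
```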

Motivated Reasoning 

When we identify too strongly with a deeply held belief, idea, or outcome, a plethora of cognitive biases can rear their ugly heads. Take confirmation bias, for example. This is our inclination to eagerly accept any information that confirms our opinion, and undervalue anything that contradicts it. It’s remarkably easy to spot in other people (especially those you don’t agree with politically), but extremely hard to spot in ourselves because the biasing happens unconsciously. But it’s always there. 

Criminal cases where jurors unconsciously ignore exonerating evidence and send an innocent person to jail because of a bad experience with someone of the defendant’s demographic. The growing inability to hear alternative arguments in good faith from other parts of the political spectrum. Conspiracy theorists swallowing any unconventional belief they can get their hands on.

We all have some deeply held belief that immediately puts us on the defensive. Defensiveness doesn’t mean that belief is actually incorrect. But it does mean we’re vulnerable to bad reasoning around it. And if you can learn to identify the emotional warning signs in yourself, you stand a better chance of evaluating the other side’s evidence or arguments more objectively.

Liv Boeree writing in Vox    

Orange Buttons are the Best

An appeal to authority is a false claim that something must be true because an authority on the subject believes it to be true. It is possible for an expert to be wrong; we need to understand their reasoning or research before we appeal to their findings. In a design meeting you might hear something like this:

“Amazon is a successful website. Amazon has orange buttons. So orange buttons are the best.”

Feel free to switch out ‘Amazon’ and ‘orange buttons’ for anything you want; you get an equally weak argument. We could argue back that Amazon is surviving on past success and that larger companies often find it hard to innovate, so it shouldn’t be used as a design influence. We could point out that Jeff Bezos has a reputation for micro-managing and ignoring the evidence provided by usability experts he has hired. As a result, we could point out that Amazon is possibly successful in spite of its design, not because of it. But the words ‘often’, ‘reputation’ and ‘possibly’ make all these arguments equally weak and full of fallacies.

When we counter any logical fallacy, we want to do it as cleanly as possible. In the above example, we only need to point out that many successful websites don’t have orange buttons and many unsuccessful sites do have orange buttons. Then we can move away from the matter entirely unless there is some research or reasoning available to explain the authority’s decision.

Rob Sutcliffe writing in Prototypr

The Madman's Narrative

Consider that two people can hold incompatible beliefs based on the exact same data. Does this mean that there are possible families of explanations and that each of these can be equally perfect and sound? Certainly not. One may have a million ways to explain things, but the true explanation is unique, whether or not it is within our reach.

In a famous argument, the logician W. V. Quine showed that there exist families of logically consistent interpretations and theories that can match a given series of facts. Such insight should warn us that mere absence of nonsense may not be sufficient to make something true.

Nassim Nicholas Taleb, The Black Swan