LLMs Reflect Western Cultural Values

It should not come as a surprise that a growing body of studies shows how LLMs predominantly reflect Western cultural values and epistemologies. They overrepresent certain dominant groups in their outputs, reinforce and amplify the biases held by these groups, and are more factually accurate on topics associated with North America and Europe. - Deepak Varuvel Dennison

8 insightful quotes about AI Bias

In an analysis of thousands of images created by Stable Diffusion, we found that image sets generated for every high-paying job were dominated by subjects with lighter skin tones, while subjects with darker skin tones were more commonly generated by prompts like “fast-food worker” and “social worker.” Most occupations in the dataset were dominated by men, except for low-paying jobs like housekeeper and cashier. Bloomberg

Eight years ago, Google disabled its A.I. program’s ability to let people search for gorillas and monkeys through its Photos app because the algorithm was incorrectly sorting Black people into those categories. As recently as May of this year, the issue still had not been fixed. Two former employees who worked on the technology told The New York Times that Google had not trained the A.I. system with enough images of Black people. New York Times

MIT student Rona Wang asked an AI image creator app called Playground AI to make a photo of her look "professional." It gave her paler skin and blue eyes, and "made me look Caucasian." Boston Globe 

We have things like recidivism algorithms that are racially biased. Even soap dispensers that don’t read darker skin. Smartwatches and other health sensors don’t work as well for darker skin. Things like selfie sticks that are supposed to track your image don’t work that well for people with darker skin because image recognition in general is biased. The Markup

AI text may be biased toward established scientific ideas and hypotheses contained in the content on which the algorithms were trained. Science.org

No doubt AI-powered writing tools have shortcomings. But their presence offers educators an on-ramp to discussions about linguistic diversity and bias. Such discussions may be especially critical on U.S. campuses. Inside Higher Ed

Major companies behind A.I. image generators — including OpenAI, Stability AI and Midjourney — have pledged to improve their tools. “Bias is an important, industrywide problem,” Alex Beck, a spokeswoman for OpenAI, said in an email interview. She declined to say how many employees were working on racial bias, or how much money the company had allocated toward the problem. New York Times

As AI models become more advanced, the images they create are increasingly difficult to distinguish from actual photos, making it hard to know what’s real. If these images depicting amplified stereotypes of race and gender find their way back into future models as training data, next generation text-to-image AI models could become even more biased, creating a snowball effect of compounding bias with potentially wide implications for society. Bloomberg

Making people confirm our favored conclusions

Most of us have ways of making other people confirm our favored conclusions without ever engaging them in conversation. Consider this: To be a great driver, lover, or chef, we don’t need to be able to parallel park while blindfolded, make ten thousand maidens swoon with a single pucker, or create a pâte feuilletée so intoxicating that the entire population of France instantly abandons its national cuisine and swears allegiance to our kitchen. Rather, we simply need to park, kiss, and bake better than most other folks do. How do we know how well most other folks do? Why, we look around, of course—but in order to make sure that we see what we want to see, we look around selectively.

For example, volunteers in one study took a test that ostensibly measured their social sensitivity and were then told that they had flubbed the majority of the questions. When these volunteers were then given an opportunity to look over the test results of other people who had performed better or worse than they had, they ignored the tests of the people who had done better and instead spent their time looking over the tests of the people who had done worse.

The bottom line is this: The brain and the eye may have a contractual relationship in which the brain has agreed to believe what the eye sees, but in return the eye has agreed to look for what the brain wants.

Daniel Gilbert, Stumbling on Happiness

Entrenched Opinions

While lack of knowledge is certainly a major source of bias, professional expertise doesn’t fare much better. Whether we are looking at judges, lawyers, professors, scientists, doctors, engineers, architects, writers, journalists, politicians, investors, economists, managers, coaches, consultants, or computer programmers, sharp differences and entrenched opinions are the norm. Deep experience and expertise do not necessarily lead to objective consensus. As behavioral scientists have long noted, subject matter experts tend to:

1.    Rely too much on societal and professional stereotypes

2.    Overvalue their personal experiences, especially recent ones

3.    Overvalue their personal gut feel

4.    Prefer anecdotes that confirm their existing views

5.    Have limited knowledge of statistics and probability

6.    Resist admitting mistakes

7.    Struggle to keep up with the skills and literature in their fields

8.    Burn out and/or make mistakes in demanding work environments

9.    Avoid criticizing, evaluating, or disciplining their peers

10.   Become less open-minded over time

For decades, we have seen the unfortunate results of these traits in criminal sentencing, student grading, medical diagnoses and treatments, hiring and salary negotiations, financial services, editorial coverage, athletic evaluations, political processes, and many other areas.

We may think that we are being impartial and fair, but our minds are full of stereotypes, preconceptions, self-interests, confirmation biases, and other discriminatory forces.

David Moschella writing for the Information Technology & Innovation Foundation

The algorithmic feedback loop

We keep encountering similar content because the algorithms keep recommending it to us. As this feedback loop continues, no new information is added; the algorithm is designed to recommend content that affirms what it construes as our taste.

Reduced to component parts, culture can now be recombined and optimized to drive user engagement. This threatens to starve culture of the resources to generate new ideas, new possibilities. 

If you want to freeze culture, the first step is to reduce it to data. And if you want to maintain the frozen status quo, algorithms trained on people’s past behaviors and tastes would be the best tools.

The goal of a recommendation algorithm isn’t to surprise or shock but to affirm. The process looks a lot like prediction, but it’s merely repetition. The result is more of the same: a present that looks like the past and a future that isn’t one. 

Grafton Tanner, writing in Real Life Magazine
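
To make the loop concrete, here is a minimal sketch of the dynamic Tanner describes, using made-up content categories and a deliberately crude "serve whatever the model already rates highest" rule; it is an illustration, not a real recommender.

```python
import random
from collections import Counter

random.seed(0)

# Made-up content categories; the user starts roughly equally open to all of them.
categories = ["news", "music", "sports", "cooking", "travel"]
affinity = {c: random.uniform(0.9, 1.1) for c in categories}  # the platform's model of "your taste"

clicks = Counter()
for _ in range(50):
    # The "algorithm": serve whatever the model already rates highest.
    recommended = max(categories, key=lambda c: affinity[c])

    # The user clicks with probability proportional to the modelled affinity.
    if random.random() < affinity[recommended] / sum(affinity.values()):
        clicks[recommended] += 1
        affinity[recommended] += 1.0   # engagement feeds straight back into the model

print(dict(clicks))
# One category crowds out the rest: the model only ever learns about what it
# already recommends, so no new information about the user is ever added.
```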

Bias in the Judicial System

When it comes to bail, for instance, you might hope that judges would be able to look at the whole case together, carefully balancing all the pros and cons before coming to a decision. But unfortunately, the evidence says otherwise. Instead, psychologists have shown that judges are doing nothing more strategic than going through an ordered checklist of warning flags in their heads. If any of those flags — past convictions, community ties, prosecution's request — are raised by the defendant's story, the judge will stop and deny bail. 

The problem is that so many of those flags are correlated with race, gender and educational level. Judges can’t help relying on intuition more than they should; and in doing so, they are unwittingly perpetuating biases in the system. 

Hannah Fry, Hello World
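
As a made-up illustration of the mechanism Fry describes (the numbers below are invented, not from her book): two groups with identical underlying risk, but one is more likely to carry a proxy flag such as a prior record, say because of heavier policing. A stop-at-the-first-flag rule then denies bail to that group more often, even though the risk is the same.

```python
import random

random.seed(7)

def flags(group):
    """Hypothetical warning flags for one defendant; true risk is identical for both groups."""
    prior_record = random.random() < (0.40 if group == "A" else 0.20)  # a proxy gap, not a risk gap
    weak_community_ties = random.random() < 0.30
    prosecution_objects = random.random() < 0.25
    return [prior_record, weak_community_ties, prosecution_objects]

def bail_denied(group):
    # The checklist rule: stop and deny at the first raised flag.
    return any(flags(group))

n = 100_000
denied = {g: sum(bail_denied(g) for _ in range(n)) / n for g in ("A", "B")}
print(denied)
# Group A is denied bail noticeably more often than group B, purely because a
# flag correlated with group membership sits on the checklist.
```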

Availability bias

People give their own memories and experiences more credence than they deserve, making it hard to accept new ideas and theories. Psychologists call this quirk the availability bias. It’s a useful built-in shortcut when you need to make quick decisions and don’t have time to critically analyze lots of data, but it messes with your fact-checking skills.

Marc Zimmer writing in The Conversation

Exponential growth bias

Imagine you are offered a deal with your bank, where your money doubles every three days. If you invest just $1 today, roughly how long will it take for you to become a millionaire? Would it be a year? Six months? 100 days? The precise answer is 60 days from your initial investment, when your balance would be exactly $1,048,576. Within a further 30 days, you’d have earnt more than a billion. And by the end of the year, you’d have more than $1,000,000,000,000,000,000,000,000,000,000,000,000 – an “undecillion” dollars.  

If your estimates were way out, you are not alone. Many people consistently underestimate how fast the value increases – a mistake known as the “exponential growth bias.”   

David Robson writing for the BBC
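
A quick check of the arithmetic in the excerpt above, since doubling every three days is just repeated multiplication by two:

```python
balance = 1  # one dollar invested today

for day in range(3, 366, 3):   # the balance doubles every third day
    balance *= 2
    if day in (60, 90, 363):
        print(f"day {day}: ${balance:,}")

# day 60  -> $1,048,576          (a millionaire after 20 doublings)
# day 90  -> $1,073,741,824      (past a billion 30 days later)
# day 363 -> 2**121 dollars, about 2.7 x 10**36: an undecillion-dollar balance
```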

Verification bias

Verification bias refers to a stubborn resistance to accepting the null hypothesis – the assumption that there is no inherent relationship between the variables being studied. The null hypothesis is the default position in experiments; it is what the researcher is attempting to eliminate through experimental investigation. For example, continuing to repeat an experiment until it “works” as desired, or excluding inconvenient cases or results, may make the hypothesis immune to the facts. Verification bias amounts to the repression of negative results. 

Augustine Brannigan, The Use and Misuse of the Experimental Method in Social Psychology
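
A small, made-up simulation of the "repeat it until it works" pattern Brannigan describes: the experiment below has no real effect (a fair coin), yet a researcher who keeps rerunning it and reports only the run that crosses an apparent significance threshold will eventually get to "confirm" the hypothesis.

```python
import random

random.seed(42)

def experiment(n_flips=100):
    """One study of a fair coin: returns the number of heads out of n_flips."""
    return sum(random.random() < 0.5 for _ in range(n_flips))

# Treat a deviation of 10 or more from the expected 50 heads as a "significant"
# result (roughly the 5% tail for a fair coin over 100 flips).
run = 0
while True:
    run += 1
    heads = experiment()
    if abs(heads - 50) >= 10:
        print(f"'Significant' result on run {run}: {heads} heads out of 100")
        break

# With no real effect, only about one run in twenty clears the bar, so persistence
# alone produces a publishable-looking result; the failed runs are simply discarded.
```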

What the Bathroom scales can tell you

When our bathroom scale delivers bad news, we hop off and then on again, just to make sure we didn’t misread the display or put too much pressure on one foot. When our scale delivers good news, we smile and head for the shower. By uncritically accepting evidence when it pleases us, and insisting on more when it doesn’t, we subtly tip the scales in our favor. 

Psychologist Dan Gilbert in The New York Times

the strongest political bias of all

The strongest bias in American politics is not a liberal bias or a conservative bias; it is a confirmation bias, or the urge to believe only things that confirm what you already believe to be true. Not only do we tend to seek out and remember information that reaffirms what we already believe, but there is also a “backfire effect,” which sees people doubling down on their beliefs after being presented with evidence that contradicts them. So, where do we go from here? There’s no simple answer, but the only way people will start rejecting falsehoods being fed to them is by confronting uncomfortable truths.

Emma Roller writing in the New York Times

Bullet-riddled Fighter Planes

During World War II, researchers from the non-profit research group the Center for Naval Analyses were tasked with a problem. They needed to reinforce the military’s fighter planes at their weakest spots. To accomplish this, they turned to data. They examined every plane that came back from a combat mission and made note of where bullets had hit the aircraft. Based on that information, they recommended that the planes be reinforced at those precise spots.

Do you see any problems with this approach?

The problem, of course, was that they looked only at the planes that returned, not at the planes that didn’t. Data from the planes that had been shot down would almost certainly have been far more useful in determining where fatal damage was likely to occur, since those were the planes that suffered catastrophic damage.

The research team suffered from survivorship bias: they just looked at the data that was available to them without analyzing the larger situation. This is a form of selection bias in which we implicitly filter data based on some arbitrary criteria and then try to make sense out of it without realizing or acknowledging that we’re working with incomplete data.

Rahul Agarwal writing in Built In
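
A hedged sketch of the filtering Agarwal describes, with made-up hit locations and loss rates: every plane gets hit somewhere, but hits to the engine are far more likely to bring a plane down, so a dataset built only from the planes that return shows almost no engine damage, precisely where reinforcement matters most.

```python
import random
from collections import Counter

random.seed(1)

# Hypothetical hit locations and the (made-up) chance that a hit there downs the plane.
loss_rate = {"wings": 0.10, "fuselage": 0.15, "tail": 0.20, "engine": 0.85}

observed = Counter()   # hits recorded on planes that made it back
actual = Counter()     # hits across all planes, returners and losses alike

for _ in range(10_000):
    hit = random.choice(list(loss_rate))
    actual[hit] += 1
    if random.random() > loss_rate[hit]:   # the plane survives and gets inspected
        observed[hit] += 1

print("hits seen on returning planes:", dict(observed))
print("hits across every plane flown:", dict(actual))
# The returning-plane data makes the engine look like the safest spot to skip,
# when it is actually the hit that most often proves fatal.
```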

Availability Bias

Have you ever said something like, “I know that [insert a generic statement here] because [insert one single example].” For example, someone might say, “You can’t get fat from drinking beer, because Bob drinks a lot of it, and he’s thin.” If you have, then you’ve suffered from availability bias. You are trying to make sense of the world with limited data.

We naturally tend to base decisions on information that is already available to us, or on things we hear about often, without looking at alternatives that might be useful. As a result, we limit ourselves to a very specific subset of information.

This happens often in the data science world. Data scientists tend to get and work on data that’s easier to obtain rather than looking for data that is harder to gather but might be more useful. We make do with models that we understand and that are available to us in a neat package rather than something more suitable for the problem at hand but much more difficult to come by.

A way to overcome availability bias in data science is to broaden our horizons. Commit to lifelong learning. Read. A lot. About everything. Then read some more. Meet new people. Discuss your work with other data scientists at work or in online forums. Be more open to suggestions about changes that you may have to take in your approach. By opening yourself up to new information and ideas, you can make sure that you’re less likely to work with incomplete information.

Rahul Agarwal writing in Built In

 

Motivated Reasoning 

When we identify too strongly with a deeply held belief, idea, or outcome, a plethora of cognitive biases can rear their ugly heads. Take confirmation bias, for example. This is our inclination to eagerly accept any information that confirms our opinion, and undervalue anything that contradicts it. It’s remarkably easy to spot in other people (especially those you don’t agree with politically), but extremely hard to spot in ourselves because the biasing happens unconsciously. But it’s always there. 

Criminal cases where jurors unconsciously ignore exonerating evidence and send an innocent person to jail because of a bad experience with someone of the defendant’s demographic. The growing inability to hear alternative arguments in good faith from other parts of the political spectrum. Conspiracy theorists swallowing any unconventional belief they can get their hands on.

We all have some deeply held belief that immediately puts us on the defensive. Defensiveness doesn’t mean that belief is actually incorrect. But it does mean we’re vulnerable to bad reasoning around it. And if you can learn to identify the emotional warning signs in yourself, you stand a better chance of evaluating the other side’s evidence or arguments more objectively.

Liv Boeree writing in Vox    

We’re hardwired to delude ourselves

When people hear the word bias, many if not most will think of either racial prejudice or news organizations that slant their coverage to favor one political position over another. Present bias, by contrast, is an example of cognitive bias—the collection of faulty ways of thinking that is apparently hardwired into the human brain. 

If I had to single out a particular bias as the most pervasive and damaging, it would probably be confirmation bias. That’s the effect that leads us to look for evidence confirming what we already think or suspect, to view facts and ideas we encounter as further confirmation, and to discount or ignore any piece of evidence that seems to support an alternate view. Confirmation bias shows up most blatantly in our current political divide, where each side seems unable to allow that the other side is right about anything.

Ben Yagoda writing in The Atlantic 

a mental short-cut that can lead us away from truth

Imagine I tell you that a group of 30 engineers and 70 lawyers have applied for a job. I show you a single application that reveals a person who is great at math and bad with people, a person who loves Star Wars and hates public speaking, and then I ask whether it is more likely that this person is an engineer or a lawyer. What is your initial, gut reaction? What seems like the right answer?

Statistically speaking, it is more likely the applicant is a lawyer. But if you are like most people in the original research, you ignored the odds when checking your gut. You tossed the numbers out the window. So what if there is a 70 percent chance this person is a lawyer? That doesn’t feel like the right answer.

That’s what a heuristic is, a simple rule that in the currency of mental processes trades accuracy for speed. A heuristic can lead to a bias, and your biases, though often correct and harmless, can be dangerous when in error, resulting in a wide variety of bad outcomes from foggy morning car crashes to unconscious prejudices in job interviews.

David McRaney writing in BoingBoing
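
A short sketch of the base-rate arithmetic behind the example above, with an assumed likelihood that is not in the article: even if the description fits engineers twice as often as lawyers, the 70/30 split keeps "lawyer" the statistically better answer.

```python
# Base rates from the example: 30 engineers and 70 lawyers in the applicant pool.
p_engineer, p_lawyer = 0.30, 0.70

# Assumed likelihoods (not from the article): suppose the description
# "great at math, bad with people" fits engineers twice as often as lawyers.
p_desc_given_engineer = 0.50
p_desc_given_lawyer = 0.25

# Bayes' rule: weight each likelihood by its base rate.
num_eng = p_desc_given_engineer * p_engineer   # 0.150
num_law = p_desc_given_lawyer * p_lawyer       # 0.175
posterior_engineer = num_eng / (num_eng + num_law)

print(f"P(engineer | description) = {posterior_engineer:.2f}")   # about 0.46
# Even with a description that favors "engineer" two to one, the 70% base rate
# keeps "lawyer" the statistically better answer; that is exactly what the gut ignores.
```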