8 Insightful Quotes About AI Bias

In an analysis of thousands of images created by Stable Diffusion, we found that image sets generated for every high-paying job were dominated by subjects with lighter skin tones, while subjects with darker skin tones were more commonly generated by prompts like “fast-food worker” and “social worker.” Most occupations in the dataset were dominated by men, except for low-paying jobs like housekeeper and cashier. Bloomberg

Eight years ago, Google disabled the ability to search for gorillas and monkeys in its Photos app because the A.I. algorithm was incorrectly sorting Black people into those categories. As recently as May of this year, the issue still had not been fixed. Two former employees who worked on the technology told The New York Times that Google had not trained the A.I. system with enough images of Black people. New York Times

MIT student Rona Wang asked an AI image creator app called Playground AI to make a photo of her look "professional." It gave her paler skin and blue eyes and, in her words, "made me look Caucasian." Boston Globe

We have things like recidivism algorithms that are racially biased, and even soap dispensers that don't read darker skin. Smartwatches and other health sensors don't work as well on darker skin. Selfie sticks that are supposed to track your image don't work that well for people with darker skin, because image recognition in general is biased. The Markup

AI-generated text may be biased toward established scientific ideas and hypotheses contained in the content on which the algorithms were trained. Science.org

No doubt AI-powered writing tools have shortcomings. But their presence offers educators an on-ramp to discussions about linguistic diversity and bias. Such discussions may be especially critical on U.S. campuses. Inside Higher Ed

Major companies behind A.I. image generators — including OpenAI, Stability AI and Midjourney — have pledged to improve their tools. “Bias is an important, industrywide problem,” Alex Beck, a spokeswoman for OpenAI, said in an email interview. She declined to say how many employees were working on racial bias, or how much money the company had allocated toward the problem. New York Times

As AI models become more advanced, the images they create are increasingly difficult to distinguish from actual photos, making it hard to know what's real. If these images depicting amplified stereotypes of race and gender find their way back into future models as training data, next-generation text-to-image AI models could become even more biased, creating a snowball effect of compounding bias with potentially wide implications for society. Bloomberg