All-or-nothing thinking

I spend days at a time in bed, staring at the ceiling and thinking of all the things I could be doing but can’t because I know I would do them imperfectly. I lose countless hours to inner monologues filled with self-hatred and all-or-nothing thinking. I don’t read anything, instead preferring to slowly crush myself with the existential weight of knowing that I will never be able to read all the things.

For a very long time, I thought that I did this because I was lazy. I figured that if I just worked a little harder, tried a little more, then I would be able to accomplish the things I set out to do. Failing to do them was a failure of my character. It was because I was a bad person, or at least bad at being a person.

I told myself that I had to get my act together; I had to do all of these things so that I could prove I wasn’t the worthless piece of garbage I thought I was. When I inevitably cracked under that pressure, I took it as proof that I was a worthless piece of garbage.

If all of this sounds repetitive, that’s because it is. It’s a vicious, repetitive, monotonous cycle. It moves at breakneck speed, but also not at all. Experiencing it is the most damning case against perfectionism I have ever come across. Expecting perfection only leaves you with two options: do everything right on the very first try, or don’t even bother. Which is actually only one option, since 9 times out of 10, human beings don't do things right on the first try.

Jenni Berrett writing in Ravishly

AI Definitions: Large Language Models

Large Language Models (LLMs) - AI systems trained on billions of examples of language use, images, and other data. Based on the user's request, they predict the next word (or pixel) in a pattern. ChatGPT and Google Bard are LLMs. The features LLMs can parse from text include grammar and language structure, word meaning and context (e.g., the word "green" likely means a color when it appears near words like "paint," "art," or "grass"), proper names (Microsoft, Bill Clinton, Shakira, Cincinnati), and emotions (indications of frustration, infatuation, positive or negative feelings, or types of humor).
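The "predict the next word" idea can be sketched with a toy frequency model. This is a minimal illustration, not how real LLMs work; they use deep neural networks trained on vastly more data, and the sample text below is invented for the example.

```python
from collections import Counter, defaultdict

# Invented sample text; a real LLM trains on billions of examples,
# not a handful of sentences.
training_text = (
    "the grass is green . the paint is green . "
    "the light turned green . the grass is tall ."
)

# Count which word follows each word in the training text.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("is"))  # → green
```

Even this crude version captures the context effect the definition describes: "green" is the likeliest continuation because the training text pairs it with words like "grass" and "paint."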

More AI definitions

Imaginary Friends

There's a little bit of evidence that adults who are novelists or musicians, for example, tend to remember the imaginary friends they had when they were children. It's as if they are staying in touch with those childhood abilities in a way that most of us don't. Successful creative adults seem to combine the wide-ranging exploration and openness we see in children with the focus and discipline we see in adults.

Alison Gopnik, The Philosophical Baby

AI Definitions: Hallucinations

Hallucinations – When an AI provides responses that are inaccurate or not based on facts. Generative AI models are designed to generate data that is realistic or distributionally equivalent to the training data and yet different from the actual data used for training. This is why they are better at brainstorming than reflecting the real world and why they should not be treated as sources of truth or factual knowledge. Generative AI models can answer some questions correctly, but this is not what they are designed and trained to do. However, hallucinating AIs can be very useful to researchers, providing innovative insights that speed up the scientific process.

More AI definitions

Sometimes Experts can't tell AI Writing from Human Writing

It’s become common for writers to mock AI’s stilted, wooden, and em-dash-heavy writing style. But with some gentle coaxing, AI is much better at writing than professional writers want to admit. In one 2025 study, three top AI models were pitted against MFA-trained writers. In initial tests, expert readers clearly preferred the human writing. But once researchers fine-tuned ChatGPT on an individual author’s full body of work, the results flipped. Suddenly, experts preferred the AI’s writing and often couldn’t tell whether it came from a human or a machine. – Derek Thompson

AI Definitions: Foundation Models

Sitting at the core of many generative AI tools, a foundation model is the starting point for many machine learning models. These deep-learning neural networks are trained on massive datasets. In contrast with traditional machine learning models, which typically perform specific tasks, foundation models are adaptable and able to perform a wide range of tasks. These models are sometimes called Large X Models, or LXMs. A video explanation.

More AI definitions

Extraordinary claims (require extraordinary evidence)

For some people, the less likely an explanation, the more likely they are to believe it. Take flat-Earth believers. Their claim rests on the idea that all the pilots, astronomers, geologists, physicists, and GPS engineers in the world are intentionally coordinating to mislead the public about the shape of the planet. From a prior odds perspective, the likelihood of a plot so enormous and intricate coming together out of all other conceivable possibilities is vanishingly small. But bizarrely, any demonstration of counterevidence, no matter how strong, just seems to cement their worldview further.

Liv Boeree writing in Vox

AI Definitions: World Models

World Models are AI systems that build up an internal approximation of an environment. Through trial and error, these systems use the representation to evaluate predictions and decisions before applying the results to real-world tasks. This contrasts with LLMs, which operate on correlations within language rather than on connections to the world itself. In the late 1980s, world models fell out of favor with scientists working on artificial intelligence and robotics. The rise of machine learning has brought interest in developing these systems back to life.

More AI definitions

17 Articles about AI & the Military

An AI Plays Civic Watchdog

CalMatters this year launched a new feature that takes this kind of civic watchdog function a big step further. Its AI Tip Sheets feature uses AI to search through all of this data, looking for anomalies, such as a change in voting position tied to a large campaign contribution. These anomalies appear on a webpage that journalists can access to give them story ideas and a source of data and analysis to drive further reporting. - The Guardian

Why some Couples Endure

There are many reasons why relationships fail, but if you look at what drives the deterioration of many relationships, it’s often a breakdown of kindness. As the normal stresses of a life together pile up—with children, career, friends, in-laws, and other distractions crowding out the time for romance and intimacy—couples may put less effort into their relationship and let the petty grievances they hold against one another tear them apart. In most marriages, levels of satisfaction drop dramatically within the first few years together. But among couples who not only endure, but live happily together for years and years, the spirit of kindness and generosity guides them forward.

Emily Esfahani Smith writing in The Atlantic

Toxicity is harder for AI to fake than intelligence

"The next time you encounter an unusually polite reply on social media, you might want to check twice. It could be an AI model trying (and failing) to blend in with the crowd. A new study reveals that AI models remain easily distinguishable from humans in social media conversations, with overly friendly emotional tone serving as the most persistent giveaway. Also, the AI models struggled to match the level of casual negativity and spontaneous emotional expression common in human social media posts." -ArsTechnica

The Perfect Parent Trap

When perfectionists become parents, their mindsets don't change; they just shift their unreasonable expectations onto their children. Now their kids must be perfect too. In fact, a number of studies have found that perfectionists are so busy worrying about the drive for excellence that they aren't sensitive or responsive to their children's real needs.

Perfectionist parenting is anxious parenting. So that their children never make mistakes, these parents are overprotective, controlling, authoritarian, intrusive and dominating.

(Not that any of it helps: Research at Macquarie University in Australia showed that perfectionist parents’ tendencies to admonish kids and emphasize accuracy didn't decrease errors in children's work.)

Unsurprisingly, kids of perfectionists are perfectionists too, adopting the same unreasonable expectations and exaggerated responses to failure. As a result, they're more likely to be anxious and obsessive. According to the University of Louisville researchers Nicholas Affrunti and Janet Woodriff-Borden, every time parents rush in to fix something, their kids learn that mistakes are threatening, and they come to believe they can't be trusted to handle new experiences on their own.

And through their parents’ disengagement, kids learn that love is conditional. The only way to get it? Achieve.

Ashley Merryman, co-author of Top Dog: The Science of Winning and Losing