AI Definitions: Circularity

Circularity – As AI companies invest in each other, money flows in a circle, from one company to another and then back again. In effect, they prop up one another’s finances, much like the “round-tripping” of the dot-com years. The result is inflated performance without real profits. The hope is that this will change over time; the larger concern is that demand for AI’s new products might never catch up with the capacity the industry is building.

More AI definitions

All-or-nothing thinking

I spend days at a time in bed, staring at the ceiling and thinking of all the things I could be doing but can’t because I know I would do them imperfectly. I lose countless hours to inner monologues filled with self-hatred and all-or-nothing thinking. I don’t read anything, instead preferring to slowly crush myself with the existential weight of knowing that I will never be able to read all the things.

For a very long time, I thought that I did this because I was lazy. I figured that if I just worked a little harder, tried a little more, then I would be able to accomplish the things I set out to do. Failing to do them was a failure of my character. It was because I was a bad person, or at least bad at being a person.

I told myself that I had to get my act together; I had to do all of these things so that I could prove I wasn’t the worthless piece of garbage I thought I was. When I inevitably cracked under that pressure, I took it as proof that I was a worthless piece of garbage.

If all of this sounds repetitive, that’s because it is. It’s a vicious, repetitive, monotonous cycle. It moves at breakneck speed, but also not at all. Experiencing it is the most damning case against perfectionism I have ever come across. Expecting perfection only leaves you with two options: do everything right on the very first try, or don’t even bother. Which is actually only one option, since 9 times out of 10, human beings don’t do things right on the first try.

Jenni Berrett writing in Ravishly

AI Definitions: Large Language Models

Large Language Models (LLMs) – AI models trained on billions of examples of language, images and other data. An LLM predicts the next word (or pixel) in a sequence based on the user’s request. ChatGPT and Google Bard are LLMs. The kinds of textual features LLMs can parse include grammar and language structure; word meaning and context (ex: the word “green” likely refers to a color when it appears near a word like “paint,” “art,” or “grass”); proper names (Microsoft, Bill Clinton, Shakira, Cincinnati); and emotions (indications of frustration, infatuation, positive or negative feelings, or types of humor).
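To make the “predict the next word” idea concrete, here is a minimal sketch using the open-source Hugging Face transformers library and the small public GPT-2 checkpoint purely as stand-ins; neither is implied by the definition above, and production systems like ChatGPT are vastly larger.

```python
# A toy look at next-word prediction with a small public model.
# Assumes the "transformers" and "torch" packages are installed;
# "gpt2" is only an illustrative stand-in for a much larger LLM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The grass outside the studio was a bright shade of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # a score for every word in the vocabulary

next_token_scores = logits[0, -1]            # scores for the word that would come next
top = torch.topk(next_token_scores, k=5)
for token_id, score in zip(top.indices, top.values):
    print(repr(tokenizer.decode(token_id)), round(float(score), 2))
```

Scaled up to billions of parameters and training examples, this same next-token machinery is what lets LLMs pick up grammar, word meaning, names, and tone from context.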

More AI definitions

Imaginary Friends

There's a little bit of evidence that adults who are novelists or musicians, for example, tend to remember the imaginary friends they had when they were children. It's as if they are staying in touch with those childhood abilities in a way that most of us don't. Successful creative adults seem to combine the wide-ranging exploration and openness we see in children with the focus and discipline we see in adults.

Alison Gopnik, The Philosophical Baby

AI Definitions: Hallucinations

Hallucinations – When an AI provides responses that are inaccurate or not grounded in fact. Generative AI models are designed to produce data that is realistic, or distributionally equivalent to their training data, yet different from the actual data they were trained on. This is why they are better at brainstorming than at reflecting the real world, and why they should not be treated as sources of truth or factual knowledge. Generative AI models can answer some questions correctly, but that is not what they are designed and trained to do. Even so, hallucinating AIs can be very useful to researchers, providing innovative insights that speed up the scientific process.
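The phrase “distributionally equivalent to the training data, yet different from it” is easier to see with a toy example. The sketch below, plain NumPy with invented numbers rather than anything resembling a real LLM, fits a simple distribution to training data and then samples brand-new values that look statistically right but correspond to nothing that actually happened.

```python
# Toy illustration: a "generative model" that learns the shape of its
# training data and then invents new values in that shape. The numbers
# here are made up purely for demonstration.
import numpy as np

rng = np.random.default_rng(0)
training_data = rng.normal(loc=5.0, scale=2.0, size=1000)   # the "facts" it was trained on

# "Training": estimate the distribution's parameters.
mu, sigma = training_data.mean(), training_data.std()

# "Generation": draw new samples that fit the learned distribution
# but are not actual records from the training set.
generated = rng.normal(loc=mu, scale=sigma, size=5)
print(generated)                                  # plausible-looking, entirely invented
print(np.isin(generated, training_data).any())    # almost surely False
```

That gap between statistically plausible and actually true is the gap the definition is warning about.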

More AI definitions

22 Recent Articles about Using AI

5 AI bots took our tough reading test. One was smartest — and it wasn’t ChatGPT. – Washington Post  

If You Turn Down an AI’s Ability to Lie, It Starts Claiming It’s Conscious – Futurism

The People Outsourcing Their Thinking to AI – The Atlantic  

What Is Agentic A.I., and Would You Trust It to Book a Flight? – New York Times

Staying Ahead of AI in Your Career – KDnuggets

How to talk to grandma about ChatGPT – Axios

Research says being 'rude' to ChatGPT makes it more efficient — I ran a politeness test to find out – Tom’s Guide

SEO Is Dead: Welcome To GEO And Generative AI Search – Forbes

She used ChatGPT to win the Virginia lottery and then donated every dollar – Washington Post  

The risks of giving ChatGPT more personality – Axios

The state of AI in 2025: Agents, innovation, and transformation – McKinsey

Poll shows a generational divide in how Americans use AI for work, creativity, and personal connection – Milwaukee Independent  

AI for therapy? Some therapists are fine with it — and use it themselves. – Washington Post

Google introduces Gemini AI chatbot to Maps, enabling users to have voice conversations about businesses, landmarks, and hazards along routes – The Verge

These students, tech workers and artists just say no to AI – The Washington Post

5 Tips When Consulting ‘Dr.’ ChatGPT – New York Times

A Googler explains how to “meta prompt” for incredible Veo videos – Google

A Beginner’s Guide To Building AI Agents – Bernard Marr

6 AI mistakes you should avoid when using chatbots – The Washington Post

I’m an A.I. Developer. Here’s How I’m Raising My Son. – New York Times

Is it ok for politicians to use AI? Survey shows where the public draws the line – The Conversation 

When should students begin learning about AI? – K-12 Dive

Sometimes Experts Can't Tell AI Writing from Human Writing

It’s become common for writers to mock AI’s stilted, wooden, and em-dash-heavy writing style. But with some gentle coaxing, AI is much better at writing than professional writers want to admit. In one 2025 study, three top AI models were pitted against MFA-trained writers. In initial tests, expert readers clearly preferred the human writing. But once researchers fine-tuned ChatGPT on an individual author’s full body of work, the results flipped. Suddenly, experts preferred the AI’s writing and often couldn’t tell whether it came from a human or a machine. – Derek Thompson

AI Definitions: Foundation Models

Sitting at the core of many generative AI tools, a foundation model is the starting point for many machine learning systems. These deep-learning neural networks are trained on massive datasets. In contrast with traditional machine learning models, which typically perform a single, specific task, foundation models are adaptable and can perform a wide range of tasks. These models are sometimes called Large X Models, or LXMs. A video explanation.
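As a rough, hedged illustration of that adaptability, the sketch below uses the Hugging Face transformers pipeline API (one possible tool among many; the example sentences and labels are invented) to point pretrained models at two unrelated tasks.

```python
# A rough sketch of "one pretrained starting point, many tasks".
# Assumes the "transformers" package; each pipeline downloads a default
# model built on top of a large pretrained network.
from transformers import pipeline

# Task 1: sentiment analysis (a pretrained encoder fine-tuned for this job).
classifier = pipeline("sentiment-analysis")
print(classifier("Foundation models turned out to be surprisingly flexible."))

# Task 2: zero-shot classification -- here a single pretrained model can be
# pointed at arbitrary, never-seen labels with no extra training at all.
zero_shot = pipeline("zero-shot-classification")
print(zero_shot(
    "The startup raised a new funding round to build data centers.",
    candidate_labels=["finance", "sports", "cooking"],
))
```

Both pipelines ride on networks that began as general-purpose pretrained models, which is the “foundation” the definition refers to.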

More AI definitions

Extraordinary claims (require extraordinary evidence)

For some people, the less likely an explanation, the more likely they are to believe it. Take flat-Earth believers. Their claim rests on the idea that all the pilots, astronomers, geologists, physicists, and GPS engineers in the world are intentionally coordinating to mislead the public about the shape of the planet. From a prior odds perspective, the likelihood of a plot so enormous and intricate coming together out of all other conceivable possibilities is vanishingly small. But bizarrely, any demonstration of counterevidence, no matter how strong, just seems to cement their worldview further.

Liv Boeree writing in Vox

AI Definitions: World Models

World Models are AI systems that build an internal approximation of an environment. Through trial and error, these systems use that representation to evaluate predictions and decisions before applying the results to real-world tasks. This contrasts with LLMs, which operate on correlations within language rather than on connections to the world itself. In the late 1980s, world models fell out of favor with scientists working on artificial intelligence and robotics. The rise of machine learning has revived interest in developing these systems.
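Here is a deliberately tiny sketch of the idea, using a made-up one-dimensional environment rather than anything from a real robotics system: the agent learns a table of what each action does through random trial and error, then consults that internal model to plan before acting.

```python
# A toy world model: learn an internal picture of a made-up 1-D
# environment by trial and error, then plan against that picture
# before acting in the "real" environment.
import random

GOAL = 5

def real_step(state, action):
    """The true environment: hidden from the agent except through experience."""
    return state + action

# 1. Trial and error: explore in short random episodes and record what
#    each action did. This table is the agent's internal world model.
model = {}                                   # (state, action) -> observed next state
for _ in range(200):
    state = 0
    for _ in range(10):
        action = random.choice([-1, 1])
        next_state = real_step(state, action)
        model[(state, action)] = next_state
        state = next_state

# 2. Planning: before touching the real environment, consult the learned
#    model and pick the action whose *predicted* outcome is closest to the goal.
def plan(state):
    known = [a for a in (-1, 1) if (state, a) in model]
    return min(known, key=lambda a: abs(model[(state, a)] - GOAL))

state, steps = 0, 0
while state != GOAL:
    state = real_step(state, plan(state))
    steps += 1
print(f"reached the goal in {steps} steps")   # 5 steps with this simple setup
```

Real world models swap the lookup table for learned neural networks and the toy corridor for rich simulations, but the pattern of imagining outcomes first and acting second is the same.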

More AI definitions

17 Articles about AI & the Military

An AI Plays Civic Watchdog

CalMatters this year launched a new feature that takes this kind of civic watchdog function a big step further. Its AI Tip Sheets feature uses AI to search through all of this data, looking for anomalies, such as a change in voting position tied to a large campaign contribution. These anomalies appear on a webpage that journalists can access to give them story ideas and a source of data and analysis to drive further reporting. - The Guardian

AI Definitions: Moravec’s Paradox

Moravec’s Paradox – What is hard for humans is easy for machines, and what is easy for humans is hard for machines. For instance, a robot can play chess or hold an object still for hours on end with no problem; tying a shoelace, catching a ball, or having a conversation is another matter. This is why AI excels at complex tasks like data analysis yet struggles with simple physical interactions, and why developing robots that are effective in the real world will take time and extraordinary technological advances. The paradox is attributed to Hans Moravec, an Austrian-born roboticist who worked at Carnegie Mellon.

More AI definitions