Imaginary Friends

There's a little bit of evidence that adults who are novelists or musicians, for example, tend to remember the imaginary friends they had when they were children. It's as if they are staying in touch with those childhood abilities in a way that most of us don't. Successful creative adults seem to combine the wide-ranging exploration and openness we see in children with the focus and discipline we see in adults.

Alison Gopnik, The Philosophical Baby

AI Definitions: Hallucinations

Hallucinations – When an AI provides responses that are inaccurate or not based on facts. Generative AI models are designed to generate data that is realistic or distributionally equivalent to the training data and yet different from the actual data used for training. This is why they are better at brainstorming than reflecting the real world and why they should not be treated as sources of truth or factual knowledge. Generative AI models can answer some questions correctly, but this is not what they are designed and trained to do. However, hallucinating AIs can be very useful to researchers, providing innovative insights that speed up the scientific process.

More AI definitions

22 Recent Articles about Using AI

5 AI bots took our tough reading test. One was smartest — and it wasn’t ChatGPT. – Washington Post  

If You Turn Down an AI’s Ability to Lie, It Starts Claiming It’s Conscious – Futurism

The People Outsourcing Their Thinking to AI – The Atlantic  

What Is Agentic A.I., and Would You Trust It to Book a Flight? – New York Times

Staying Ahead of AI in Your Career – KDnuggets

How to talk to grandma about ChatGPT – Axios

Research says being 'rude' to ChatGPT makes it more efficient — I ran a politeness test to find out – Tom’s Guide

SEO Is Dead: Welcome To GEO And Generative AI Search – Forbes

She used ChatGPT to win the Virginia lottery and then donated every dollar – Washington Post  

The risks of giving ChatGPT more personality – Axios

The state of AI in 2025: Agents, innovation, and transformation – McKinsey

Poll shows a generational divide in how Americans use AI for work, creativity, and personal connection – Milwaukee Independent  

AI for therapy? Some therapists are fine with it — and use it themselves. – Washington Post

Google introduces Gemini AI chatbot to Maps, enabling users to have voice conversations about businesses, landmarks, and hazards along routes – The Verge

These students, tech workers and artists just say no to AI – The Washington Post

5 Tips When Consulting ‘Dr.’ ChatGPT – New York Times

A Googler explains how to “meta prompt” for incredible Veo videos – Google

A Beginner’s Guide To Building AI Agents – Bernard Marr

6 AI mistakes you should avoid when using chatbots – The Washington Post

I’m an A.I. Developer. Here’s How I’m Raising My Son. – New York Times

Is it ok for politicians to use AI? Survey shows where the public draws the line – The Conversation 

When should students begin learning about AI? – K-12 Dive

Sometimes Experts can't tell AI Writing from Human Writing

It’s become common for writers to mock AI’s stilted, wooden, and em-dash-heavy writing style. But with some gentle coaxing, AI is much better at writing than professional writers want to admit. In one 2025 study, three top AI models were pitted against MFA-trained writers. In initial tests, expert readers clearly preferred the human writing. But once researchers fine-tuned ChatGPT on an individual author’s full body of work, the results flipped. Suddenly, experts preferred the AI’s writing and often couldn’t tell whether it came from a human or a machine. – Derek Thompson

AI Definitions: Foundation Models

Sitting at the core of many generative AI tools, a foundation model is the starting point for many machine learning models. These deep-learning neural networks are trained on massive datasets. In contrast with traditional machine learning models, which typically perform specific tasks, foundation models are adaptable and able to perform a wide range of tasks. These models are sometimes called Large X Models, or LXMs. A video explanation.

More AI definitions

Extraordinary claims (require extraordinary evidence)

For some people, the less likely an explanation, the more likely they are to believe it. Take flat-Earth believers. Their claim rests on the idea that all the pilots, astronomers, geologists, physicists, and GPS engineers in the world are intentionally coordinating to mislead the public about the shape of the planet. From a prior odds perspective, the likelihood of a plot so enormous and intricate coming together out of all other conceivable possibilities is vanishingly small. But bizarrely, any demonstration of counterevidence, no matter how strong, just seems to cement their worldview further.

Liv Boeree writing in Vox

AI Definitions: World Models

World Models are AI systems that build up an internal approximation of an environment. Through trial and error, these systems use that representation to evaluate predictions and decisions before applying the results to real-world tasks. This contrasts with LLMs, which operate on correlations within language rather than on connections to the world itself. In the late 1980s, world models fell out of favor with scientists working on artificial intelligence and robotics. The rise of machine learning has brought interest in developing these systems back to life.

More AI definitions

17 Articles about AI & the Military

An AI Plays Civic Watchdog

CalMatters this year launched a new feature that takes this kind of civic watchdog function a big step further. Its AI Tip Sheets feature uses AI to search through all of this data, looking for anomalies, such as a change in voting position tied to a large campaign contribution. These anomalies appear on a webpage that journalists can access to give them story ideas and a source of data and analysis to drive further reporting. - The Guardian

AI Definitions: Moravec’s Paradox

Moravec’s Paradox – What is hard for humans is easy for machines, and what is easy for humans is hard for machines. For instance, a robot can play chess or hold an object still for hours on end with no problem. Tying a shoelace, catching a ball, or having a conversation is another matter. This is why AI excels at complex tasks like data analysis yet struggles with simple physical interactions, and why developing robots that are effective in the real world will take time and extraordinary technological advances. The paradox is attributed to Hans Moravec, an Austrian-born roboticist who worked at Carnegie Mellon.

More AI definitions

Why some Couples Endure

There are many reasons why relationships fail, but if you look at what drives the deterioration of many relationships, it’s often a breakdown of kindness. As the normal stresses of a life together pile up—with children, career, friends, in-laws, and other distractions crowding out the time for romance and intimacy—couples may put less effort into their relationship and let the petty grievances they hold against one another tear them apart. In most marriages, levels of satisfaction drop dramatically within the first few years together. But among couples who not only endure, but live happily together for years and years, the spirit of kindness and generosity guides them forward.

Emily Esfahani Smith writing in The Atlantic

Toxicity is harder for AI to fake than intelligence

"The next time you encounter an unusually polite reply on social media, you might want to check twice. It could be an AI model trying (and failing) to blend in with the crowd. A new study reveals that AI models remain easily distinguishable from humans in social media conversations, with overly friendly emotional tone serving as the most persistent giveaway. Also, the AI models struggled to match the level of casual negativity and spontaneous emotional expression common in human social media posts." -ArsTechnica

The Perfect Parent Trap

When perfectionists become parents, their mindsets don't change; they just shift their unreasonable expectations onto their children. Now their kids must be perfect too. In fact, a number of studies have found that perfectionists are so busy worrying about the drive for excellence that they aren't sensitive or responsive to their children's real needs.

Perfectionist parenting is anxious parenting. So that their children never make mistakes, these parents are overprotective, controlling, authoritarian, intrusive and dominating.

(Not that any of it helps: Research at Macquarie University in Australia showed that perfectionist parents’ tendencies to admonish kids and emphasize accuracy didn't decrease errors in children's work.)

Unsurprisingly, kids of perfectionists are perfectionists too, adopting the same unreasonable expectations and exaggerated responses to failure. As a result, they're more likely to be anxious and obsessive. According to the University of Louisville researchers Nicholas Affrunti and Janet Woodruff-Borden, every time parents rush in to fix something, their kids learn that mistakes are threatening, and they come to believe they can't be trusted to handle new experiences on their own.

And through their parents’ disengagement, kids learn that love is conditional. The only way to get it? Achieve.

Ashley Merryman, co-author of Top Dog: The Science of Winning and Losing

30 Recent Articles about the Impact of AI on Health Care

Woman Scammed by Ad With Deepfake of Her Doctor – NBC’s Today Show

What the next generation of doctors needs to know about AI – WBUR  

AI Accurately Predicts Complication Risk After Kidney Cancer Surgery – Cancer Nursing Today

AI fails to reliably detect pediatric pneumonia on X-ray – Univ of Wisconsin Medicine 

People Are Uploading Their Medical Records to A.I. Chatbots – New York Times

Instead of an AI Health Coach, You Could Just Have Friends – Wired

The AI model that uses sounds like coughs & sniffles to predict early signs of disease – Mashable

The right place for AI companions in mental health care – Stat News

We found what you’re asking ChatGPT about health. A doctor scored its answers. – Washington Post

How AI can monitor your movements to improve your health – Fast Company

The perils of politeness: how large language models may amplify medical misinformation – Nature

How conspiracy theories infiltrated the doctor’s office – MIT Technology Review

Microsoft launches 'superintelligence' team targeting medical diagnosis to start – Reuters

AI steps in to detect the world's deadliest infectious disease – NPR

Evaluating the performance of large language models versus human researchers on real world complex medical queries – Nature

Agentic AI advantage for pharma – McKinsey

5 Tips When Consulting ‘Dr.’ ChatGPT – New York Times

AI May Be the Cure for Doctor Burnout, After All – Newsweek

Answering your questions about using AI as a health care guide – Washington Post

RTP startup uses AI to fight health insurance denials – Axios

OpenEvidence, the ChatGPT for doctors, raises $200M at $6B valuation – TechCrunch

Low-quality papers are flooding the cancer literature — can this AI tool help to catch them? – Nature

New AI-powered model predicts which children are most at risk of developing sepsis—when the immune system overreacts to an infection—within 48 hours of an emergency room visit – Northwestern

How AI is taking over every step of drug discovery -  Chemical & Engineering News

Coalition for Health AI faces escalating attacks by Trump officials, loss of founding member Amazon – StatNews

Empathetic, Available, Cheap: When A.I. Offers What Doctors Don’t – New York Times

How AI scribes could usher in higher medical bills – StatNews

Academic misconduct and artificial intelligence use by medical students, interns and PhD students in Ukraine: a cross-sectional study – Springer

Review of Large Language Models for Patient and Caregiver Support in Cancer Care Delivery – ASCO  

A Prompt Engineering Framework for Large Language Model-Based Mental Health Chatbots: Conceptual Framework – PubMed