Sometimes Experts can't tell AI Writing from Human Writing

It’s become common for writers to mock AI’s stilted, wooden, and em-dash-heavy writing style. But with some gentle coaxing, AI is much better at writing than professional writers want to admit. In one 2025 study, three top AI models were pitted against MFA-trained writers. In initial tests, expert readers clearly preferred the human writing. But once researchers fine-tuned ChatGPT on an individual author’s full body of work, the results flipped. Suddenly, experts preferred the AI’s writing and often couldn’t tell whether it came from a human or a machine. – Derek Thompson

AI Definitions: Foundation Models

Sitting at the core of many generative AI tools, a foundation model is the starting point for many machine learning systems. These deep-learning neural networks are trained on massive datasets. In contrast with traditional machine learning models, which typically perform specific tasks, foundation models are adaptable and able to perform a wide range of tasks. These models are sometimes called Large X Models, or LXMs. A video explanation.

More AI definitions

Extraordinary claims (require extraordinary evidence)

For some people, the less likely an explanation, the more likely they are to believe it. Take flat-Earth believers. Their claim rests on the idea that all the pilots, astronomers, geologists, physicists, and GPS engineers in the world are intentionally coordinating to mislead the public about the shape of the planet. From a prior odds perspective, the likelihood of a plot so enormous and intricate coming together out of all other conceivable possibilities is vanishingly small. But bizarrely, any demonstration of counterevidence, no matter how strong, just seems to cement their worldview further.

Liv Boeree writing in Vox

AI Definitions: World Models

World Models are AI systems that build up an internal approximation of an environment. Through trial and error, these systems use the representation to evaluate predictions and decisions before applying the results to real-world tasks. This contrasts with LLMs, which operate on correlations within language rather than on connections to the world itself. In the late 1980s, world models fell out of favor with scientists working on artificial intelligence and robotics. The rise of machine learning has revived interest in developing these systems.

More AI definitions

17 Articles about AI & the Military

An AI Plays Civic Watchdog

CalMatters this year launched a new feature that takes the civic watchdog function a big step further. Its AI Tip Sheets feature uses AI to search through all of this data, looking for anomalies, such as a change in voting position tied to a large campaign contribution. These anomalies appear on a webpage that journalists can access, giving them story ideas and a source of data and analysis to drive further reporting. - The Guardian

Why some Couples Endure

There are many reasons why relationships fail, but if you look at what drives the deterioration of many relationships, it’s often a breakdown of kindness. As the normal stresses of a life together pile up—with children, career, friends, in-laws, and other distractions crowding out the time for romance and intimacy—couples may put less effort into their relationship and let the petty grievances they hold against one another tear them apart. In most marriages, levels of satisfaction drop dramatically within the first few years together. But among couples who not only endure, but live happily together for years and years, the spirit of kindness and generosity guides them forward.

Emily Esfahani Smith writing in The Atlantic

Toxicity is harder for AI to fake than intelligence

"The next time you encounter an unusually polite reply on social media, you might want to check twice. It could be an AI model trying (and failing) to blend in with the crowd. A new study reveals that AI models remain easily distinguishable from humans in social media conversations, with overly friendly emotional tone serving as the most persistent giveaway. Also, the AI models struggled to match the level of casual negativity and spontaneous emotional expression common in human social media posts." -ArsTechnica

The Perfect Parent Trap

When perfectionists become parents, their mindsets don't change; they just shift their unreasonable expectations onto their children. Now their kids must be perfect too. In fact, a number of studies have found that perfectionists are so busy worrying about the drive for excellence that they aren't sensitive or responsive to their children's real needs.

Perfectionist parenting is anxious parenting. So that their children never make mistakes, these parents are overprotective, controlling, authoritarian, intrusive and dominating.

(Not that any of it helps: Research at Macquarie University in Australia showed that perfectionist parents’ tendencies to admonish kids and emphasize accuracy didn't decrease errors in children's work.)

Unsurprisingly, kids of perfectionists are perfectionists too, adopting the same unreasonable expectations and exaggerated responses to failure. As a result, they're more likely to be anxious and obsessive. According to the University of Louisville researchers Nicholas Affrunti and Janet Woodriff-Borden, every time parents rush in to fix something, their kids learn that mistakes are threatening, and they come to believe they can't be trusted to handle new experiences on their own.

And through their parents’ disengagement, kids learn that love is conditional. The only way to get it? Achieve.

Ashley Merryman, co-author of Top Dog: The Science of Winning and Losing

30 Recent Articles about the Impact of AI on Health Care

Woman Scammed by Ad With Deepfake of Her Doctor – NBC’s Today Show

What the next generation of doctors needs to know about AI – WBUR  

AI Accurately Predicts Complication Risk After Kidney Cancer Surgery – Cancer Nursing Today

AI fails to reliably detect pediatric pneumonia on X-ray – Univ of Wisconsin Medicine 

People Are Uploading Their Medical Records to A.I. Chatbots – New York Times

Instead of an AI Health Coach, You Could Just Have Friends – Wired

The AI model that uses sounds like coughs & sniffles to predict early signs of disease – Mashable

The right place for AI companions in mental health care – Stat News

We found what you’re asking ChatGPT about health. A doctor scored its answers. – Washington Post

How AI can monitor your movements to improve your health – Fast Company

The perils of politeness: how large language models may amplify medical misinformation – Nature

How conspiracy theories infiltrated the doctor’s office - MIT Technology Review

Microsoft launches 'superintelligence' team targeting medical diagnosis to start – Reuters

AI steps in to detect the world's deadliest infectious disease – NPR

Evaluating the performance of large language models versus human researchers on real world complex medical queries – Nature

Agentic AI advantage for pharma - McKinsey

5 Tips When Consulting ‘Dr.’ ChatGPT – New York Times

AI May Be the Cure for Doctor Burnout, After All – Newsweek

Answering your questions about using AI as a health care guide – Washington Post

RTP startup uses AI to fight health insurance denials – Axios

OpenEvidence, the ChatGPT for doctors, raises $200M at $6B valuation – TechCrunch

Low-quality papers are flooding the cancer literature — can this AI tool help to catch them? – Nature

New AI-powered model predicts which children are most at risk of developing sepsis—when the immune system overreacts to an infection—within 48 hours of an emergency room visit – Northwestern

How AI is taking over every step of drug discovery -  Chemical & Engineering News

Coalition for Health AI faces escalating attacks by Trump officials, loss of founding member Amazon – StatNews

Empathetic, Available, Cheap: When A.I. Offers What Doctors Don’t – New York Times

How AI scribes could usher in higher medical bills - StatNews

Academic misconduct and artificial intelligence use by medical students, interns and PhD students in Ukraine: a cross-sectional study – Springer

Review of Large Language Models for Patient and Caregiver Support in Cancer Care Delivery – ASCO  

A Prompt Engineering Framework for Large Language Model-Based Mental Health Chatbots: Conceptual Framework – PubMed

Majoring in AI

Why major in computer science when you can major in artificial intelligence? From the NYT: At MIT, a new program called “artificial intelligence and decision-making” has become the second most popular major. At the University of California, San Diego, 150 first-year students signed up for a new AI program. The State University of New York at Buffalo has created a stand-alone “department of AI and society.” More than 3,000 students enrolled in a new college of AI & cybersecurity at the University of South Florida.

A Painting not a Ladder

When you look at a painting from a distance, you see a larger, cohesive picture. But as you approach the canvas, you see that there are, in fact, hundreds of separate strokes that make up that picture. Think about your career as a work of art — expansive, independent movements that incrementally reveal a whole.

When we visualize a career ladder, we start putting ourselves in a box. Step back and see the painting — every experience adds a brushstroke to a bigger picture. 

Zainab Ghadiyali quoted in a FirstRound article 

How AI search (GEO) differs from SEO

AI Overviews and AI Mode are dramatically changing organic search traffic. Content creators are focusing on “position zero” — that is, the search snippet or AI Overview that appears at the top of many Google search result pages.

The process of optimizing your website’s content to boost its visibility in AI-driven search engines (ChatGPT, Perplexity, Gemini, Copilot and Google AI) through GEO (generative engine optimization) has some similarities to boosting its visibility in traditional search engines (Google, Microsoft Bing) through SEO (search engine optimization). SEO is a sort of guessing game, a digital Jeopardy! in which the person creating web content tries to anticipate the query that will bring users to their content. GEO has the same goal, only aimed at AI Overviews and AI Mode.

The game is similar for both SEO and GEO. Both use keywords and contextual phrasing, prioritize engaging content, and aim to connect with conversational user queries. Both consider how fast a website loads and whether it is mobile-friendly, and both favor technically sound websites.

However, while SEO focuses on metatags, keywords and backlinks, AI models are trained to provide quick, direct responses synthesized from content gathered across multiple sources. GEO draws not only on the query but also on information about the user — from their social media footprint to their Google Docs usage. This informs not only the search at hand but future searches as well. AI will evaluate who created the content, how trustworthy it is, and how it fits within the broader knowledge graph the AI is using.

Generative search efforts, therefore, attempt to fit into this reasoning process. AI judges the content's value not just on whether it ends up part of the final answer, but on whether it helps the model reason its way toward that answer. This is why, despite following all the typical SEO best practices, a GEO effort may not make it to the other side of the AI reasoning pipeline. It’s not enough to be generally relevant to the final answer. Your content is now in direct competition with other plausible answers, so it must be more useful, precise, and complete than the next-best option. In fact, the same content could go through the pipeline a second time and yield a different result. And since newer models are changing rapidly right now, a GEO strategy may be effective with an older model but not with a more recently trained one.

There is also a shift in user behavior to consider: queries are getting longer and more natural, moving from one- or two-word keywords to three- and four-word search terms. Research indicates that queries in AI Mode are generally two to three times the length of traditional searches.

What do AI Overviews avoid? Content that is overly generalized, speculative, or optimized for clickbait over clarity. Vague and generic writing underperforms. So what kind of content do Google's AI Overviews favor? Content that:

  • Contains the who, what, and why

  • Is straightforward and distinctive; AI rewards niche-specific content

  • Is written in natural, conversational terms (AI will attempt to deliver its answer in that same way)

  • Uses strong introductory sentences that convey clear value

  • Has H2 tags (subheadings) that align with user questions

  • Is structured to match common question structures (open, closed, probing)

  • Answers complex questions

  • Allows for restatement of queries and implied sub-questions, where a main question is broken down into smaller parts; content structured to be easily grabbed — in citable chunks

  • Contains multi-faceted answers

  • Is rich in relationships

  • Has explicit logical structure and supports causal progression

  • Has clear headlines

  • Cites sources and has clear authorship

  • Includes statistics and quotations

  • Integrates multimedia

  • Tells the world something new

  • Uses HTML anchor jump links to connect different sections of content to one another

  • For podcasts: includes full transcripts in YouTube video descriptions, which are easily searchable

  • Appears on YouTube (a Google-owned company), where the titles, descriptions and transcripts of videos are indexed
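Two of the techniques above — H2 subheadings that mirror user questions, and anchor jump links connecting sections — can be combined on a single page. A minimal sketch (the headings, ids, and wording here are invented for illustration, not taken from any of the cited reports):

```html
<!-- Jump links near the top of the page, pointing to question-style sections -->
<nav>
  <a href="#what-is-geo">What is GEO?</a>
  <a href="#geo-vs-seo">How does GEO differ from SEO?</a>
</nav>

<!-- Each H2 is phrased as a question a user might actually type or ask -->
<h2 id="what-is-geo">What is GEO?</h2>
<p>A direct, self-contained answer goes here, written so an AI engine
can lift it as a citable chunk.</p>

<h2 id="geo-vs-seo">How does GEO differ from SEO?</h2>
<p>A second self-contained answer, again matching the question above it.</p>
```

The idea is that each `id` gives AI and traditional crawlers a stable anchor for one question-and-answer unit, so a section can be cited or linked on its own.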

More information:

What is AI reading? Takeaways from a report on AI brand visibility

How AI Mode and AI Overviews work based on patents and why we need new strategic focus on SEO

What is generative engine optimization (GEO)?

How To Get Your Content (& Brand) Recommended By AI & LLMs

Google Ads data shows query length shift post-AI Mode

The winners and losers of Google’s AI Mode

SEO Is Dead. Say Hello to GEO

Stephen Goforth