Extraordinary claims (require extraordinary evidence)

For some people, the less likely an explanation, the more likely they are to believe it. Take flat-Earth believers. Their claim rests on the idea that all the pilots, astronomers, geologists, physicists, and GPS engineers in the world are intentionally coordinating to mislead the public about the shape of the planet. From a prior odds perspective, the likelihood of a plot so enormous and intricate coming together out of all other conceivable possibilities is vanishingly small. But bizarrely, any demonstration of counterevidence, no matter how strong, just seems to cement their worldview further.

Liv Boeree writing in Vox

AI Definitions: World Models

World Models are AI systems that build up an internal approximation of an environment. Through trial and error, these systems use that representation to evaluate predictions and decisions before applying the results to real-world tasks. This contrasts with LLMs, which operate on correlations within language rather than on connections to the world itself. In the late 1980s, world models fell out of favor with scientists working on artificial intelligence and robotics. The rise of machine learning has revived interest in developing these systems.
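To make the contrast concrete, here is a minimal, hypothetical sketch in Python (a toy illustration, not any particular research system): the agent keeps a crude internal model of how its actions change a one-dimensional environment, uses that model to evaluate candidate actions before acting, and refines the model from each real observation through trial and error.

```python
import random

# Hypothetical toy environment (an assumption for illustration only):
# the state is a single number, and each action nudges it, with a little
# noise, toward or away from a target the agent wants to reach.
TARGET = 10.0
TRUE_EFFECTS = {"left": -1.0, "right": +1.0}  # real dynamics, unknown to the agent

def real_step(state, action):
    """Take one step in the real environment."""
    return state + TRUE_EFFECTS[action] + random.gauss(0, 0.1)

# The agent's internal world model: a running estimate of each action's effect.
model = {"left": 0.0, "right": 0.0}
counts = {"left": 0, "right": 0}

state = 0.0
for _ in range(50):
    # Plan inside the model: predict where each action would lead and pick
    # the one predicted to land closest to the target, before acting for real.
    action = min(model, key=lambda a: abs((state + model[a]) - TARGET))

    # Act in the real environment and observe what actually happened.
    next_state = real_step(state, action)
    observed_effect = next_state - state

    # Trial and error: nudge the model's estimate toward the observed effect.
    counts[action] += 1
    model[action] += (observed_effect - model[action]) / counts[action]

    state = next_state

print(f"final state: {state:.2f}")
print(f"learned effects: left={model['left']:.2f}, right={model['right']:.2f}")
```

The loop is the whole point: predict with the internal model, act in the real environment, compare, and update.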

More AI definitions

17 Articles about AI & the Military

An AI Plays Civic Watchdog

CalMatters this year launched a new feature that takes this kind of civic watchdog function a big step further. Its AI Tip Sheets feature uses AI to search through all of this data, looking for anomalies, such as a change in voting position tied to a large campaign contribution. These anomalies appear on a webpage that journalists can access to give them story ideas and a source of data and analysis to drive further reporting. - The Guardian

AI Definitions: Moravec’s Paradox

Moravec’s Paradox - What is hard for humans is often easy for machines, and what is easy for humans is hard for machines. For instance, a robot can play chess or hold an object still for hours on end with no problem. Tying a shoelace, catching a ball, or having a conversation is another matter. This is why AI excels at complex tasks like data analysis yet struggles with simple physical interactions, and why developing robots that are effective in the real world will take time and extraordinary technological advances. The paradox is attributed to Hans Moravec, an Austrian-born roboticist at Carnegie Mellon University.

More AI definitions

Why Some Couples Endure

There are many reasons why relationships fail, but if you look at what drives the deterioration of many relationships, it’s often a breakdown of kindness. As the normal stresses of a life together pile up—with children, career, friends, in-laws, and other distractions crowding out the time for romance and intimacy—couples may put less effort into their relationship and let the petty grievances they hold against one another tear them apart. In most marriages, levels of satisfaction drop dramatically within the first few years together. But among couples who not only endure, but live happily together for years and years, the spirit of kindness and generosity guides them forward.

Emily Esfahani Smith writing in The Atlantic

Toxicity is harder for AI to fake than intelligence

"The next time you encounter an unusually polite reply on social media, you might want to check twice. It could be an AI model trying (and failing) to blend in with the crowd. A new study reveals that AI models remain easily distinguishable from humans in social media conversations, with overly friendly emotional tone serving as the most persistent giveaway. Also, the AI models struggled to match the level of casual negativity and spontaneous emotional expression common in human social media posts." -ArsTechnica

The Perfect Parent Trap

When perfectionists become parents, their mindsets don't change; they just shift their unreasonable expectations onto their children. Now their kids must be perfect too. In fact, a number of studies have found that perfectionists are so busy worrying about the drive for excellence that they aren't sensitive or responsive to their children's real needs.

Perfectionist parenting is anxious parenting. So that their children never make mistakes, these parents are overprotective, controlling, authoritarian, intrusive and dominating.

(Not that any of it helps: Research at Macquarie University in Australia showed that perfectionist parents’ tendencies to admonish kids and emphasize accuracy didn't decrease errors in children's work.)

Unsurprisingly, kids of perfectionists are perfectionists too, adopting the same unreasonable expectations and exaggerated responses to failure. As a result, they're more likely to be anxious and obsessive. According to the University of Louisville researchers Nicholas Affrunti and Janet Woodruff-Borden, every time parents rush in to fix something, their kids learn that mistakes are threatening, and they come to believe they can't be trusted to handle new experiences on their own.

And through their parents’ disengagement, kids learn that love is conditional. The only way to get it? Achieve.

Ashley Merryman, co-author of Top Dog: The Science of Winning and Losing

30 Recent Articles about the Impact of AI on Health Care

Woman Scammed by Ad With Deepfake of Her Doctor – NBC’s Today Show

What the next generation of doctors needs to know about AI – WBUR  

AI Accurately Predicts Complication Risk After Kidney Cancer Surgery – Cancer Nursing Today

AI fails to reliably detect pediatric pneumonia on X-ray – Univ of Wisconsin Medicine 

People Are Uploading Their Medical Records to A.I. Chatbots – New York Times

Instead of an AI Health Coach, You Could Just Have Friends – Wired

The AI model that uses sounds like coughs & sniffles to predict early signs of disease – Mashable

The right place for AI companions in mental health care – Stat News

We found what you’re asking ChatGPT about health. A doctor scored its answers. – Washington Post

How AI can monitor your movements to improve your health – Fast Company

The perils of politeness: how large language models may amplify medical misinformation – Nature

How conspiracy theories infiltrated the doctor’s office - MIT Technology Review

Microsoft launches 'superintelligence' team targeting medical diagnosis to start – Reuters

AI steps in to detect the world's deadliest infectious disease – NPR

Evaluating the performance of large language models versus human researchers on real world complex medical queries – Nature

Agentic AI advantage for pharma – McKinsey

5 Tips When Consulting ‘Dr.’ ChatGPT – New York Times

AI May Be the Cure for Doctor Burnout, After All – Newsweek

Answering your questions about using AI as a health care guide – Washington Post

RTP startup uses AI to fight health insurance denials – Axios

OpenEvidence, the ChatGPT for doctors, raises $200M at $6B valuation – TechCrunch

Low-quality papers are flooding the cancer literature — can this AI tool help to catch them? – Nature

New AI-powered model predicts which children are most at risk of developing sepsis—when the immune system overreacts to an infection—within 48 hours of an emergency room visit – Northwestern

How AI is taking over every step of drug discovery – Chemical & Engineering News

Coalition for Health AI faces escalating attacks by Trump officials, loss of founding member Amazon – Stat News

Empathetic, Available, Cheap: When A.I. Offers What Doctors Don’t – New York Times

How AI scribes could usher in higher medical bills – Stat News

Academic misconduct and artificial intelligence use by medical students, interns and PhD students in Ukraine: a cross-sectional study – Springer

Review of Large Language Models for Patient and Caregiver Support in Cancer Care Delivery – ASCO  

A Prompt Engineering Framework for Large Language Model-Based Mental Health Chatbots: Conceptual Framework– PubMed

24 Recent Articles about AI Fakes

Researchers: Toxicity is harder for AI to fake than intelligence – Ars Technica 

Journalist Caught Publishing Fake Articles Generated by AI – Futurism  

AI video slop is everywhere, take our quiz to try and spot it – NPR

Deepfake of North Carolina lawmaker used in award-winning Whirlpool video - The Washington Post

An MIT Student Awed Top Economists With His AI Study—Then It All Fell Apart. – Wall Street Journal  

Deepfakes flood retailers ahead of peak holiday shopping – Axios

AI comes to local elections. Fake videos hit contentious school board races – Columbus Dispatch

Georgia Rep.’s campaign uses AI-generated deepfake of opponent in tight Senate showdown – CBS News  

Welcome to the Slopverse: Generative AI isn’t hallucinatory. It is multiversal. – The Atlantic  

Town’s Christmas art contest ends in scandal: Did the winner use AI? - The Washington Post

The number one sign you're watching an AI video – BBC   

AI-generated evidence is showing up in court – NBC News

Investigating a Possible Scammer in Journalism’s AI Era – The Local

How would-be authors were fooled by AI in suspected global publishing scam – The Guardian

University of Hong Kong probes non-existent AI-generated references in paper; prof. says content not fabricated – Hong Kong Free Press  

People can't tell AI-generated music from real thing anymore, survey shows – CBS News 

Major Study Finds Many Mistakes in AI-Generated News Summaries – TV Tech

AI-generated news sites spout viral slop from forgotten URLs – Harvard’s Nieman Lab 

Deepfake Videos Are More Realistic Than Ever. Here's How to Spot if a Video Is Real or AI - CNET 

Teacher pleads guilty after being accused of using AI to make sexual videos of 8 students – KGNS-TV  

A YouTube tool that uses creators’ biometrics to help them remove AI-generated videos that exploit their likeness also allows Google to train its AI models on that sensitive data – CNBC  

Woman Scammed by Ad With Deepfake of Her Doctor – NBC’s Today Show

Woman accused of using AI to create fake burglary suspect – Fox13 Tampa Bay  

AI deepfakes are costing billions in fraud. Can you detect one? Take our quiz - NBC Bay Area

Majoring in AI

Why major in computer science when you can major in artificial intelligence? From the NYT: At MIT, a new program called “artificial intelligence and decision-making” has become the second most popular major. At the University of California, San Diego, 150 first-year students signed up for a new AI program. The State University of New York at Buffalo has created a stand-alone “department of AI and society.” More than 3,000 students enrolled in a new college of AI & cybersecurity at the University of South Florida.

A Painting not a Ladder

When you look at a painting from a distance, you see a larger, cohesive picture. But as you approach the canvas, you see that there are, in fact, hundreds of separate strokes that make up that picture. Think about your career as a work of art — expansive, independent movements that incrementally reveal a whole.

When we visualize a career ladder, we start putting ourselves in a box. Step back and see the painting — every experience adds a brushstroke to a bigger picture. 

Zainab Ghadiyali quoted in a FirstRound article 

How AI search (GEO) differs from SEO

AI Overviews and AI Mode are dramatically changing organic search traffic. Content creators are focusing on “position zero,” that is, the search snippet or AI Overview that appears at the top of many Google search results pages.

GEO (generative engine optimization) is the process of optimizing your website’s content to boost its visibility to AI-driven search engines (ChatGPT, Perplexity, Gemini, Copilot and Google AI). It has some similarities to SEO (search engine optimization), which boosts visibility in traditional search engines (Google, Microsoft Bing). SEO is a sort of guessing game, a digital Jeopardy! in which the person creating web content tries to anticipate the query that will bring users to their content. GEO has the same goal, only aimed at AI Overviews and AI Mode.

The two games overlap: both use keywords and contextual phrasing, prioritize engaging content, and aim to connect with conversational user queries. Both reward fast-loading, mobile-friendly, technically sound websites.

However, while SEO focuses on metatags, keywords, and backlinks, AI models are trained to provide quick, direct responses synthesized from content gathered from multiple sources. GEO draws on not only the query but also information about the user, from their social media footprint to their Google Docs usage. That information shapes not only the search at hand but future searches as well. The AI will evaluate who created the content, how trustworthy it is, and how it fits within the broader knowledge graph the AI is using.

Generative search efforts, therefore, attempt to fit into this reasoning process. AI judges content value not just on whether it ends up as part of the final answer but on whether it helps the model reason its way toward that answer. This is why, despite following all the typical SEO practices, a GEO effort may not make it through the AI reasoning pipeline. It’s not enough to be generally relevant to the final answer. Your content is now in direct competition with other plausible answers, so it must be more useful, precise, and complete than the next-best option. In fact, the same content could go through the pipeline a second time and yield a different result. And since models are changing rapidly right now, a GEO approach that works with an older model may not work with a more recently trained one.

There is also a shift in user behavior to consider: queries are getting longer and more natural, moving from one- or two-word keywords to three- and four-word search terms. Research indicates that queries in AI Mode are generally two to three times the length of traditional searches.

What do AI Overviews avoid? Content that is overly generalized, speculative, or optimized for clickbait over clarity. Vague and generic writing underperforms. So what kind of content do Google’s AI Overviews favor?

  • Content that contains the who, what, why

  • Straightforward content offering distinctiveness; AI rewards niche-specific content

  • Is written in natural, conversational terms (AI will attempt to deliver its answer in that same way)

  • Uses strong introductory sentences that convey clear value 

  • Has H2 tags (subheadings) that align with user questions

  • Is structured to match common question types (open, closed, probing)

  • Answers complex questions

  • Allows for restatement of queries and implied sub-questions, where a main question is broken down into smaller parts; content structured so it can be easily grabbed in citable chunks

  • Contains multi-faceted answers

  • Is rich in relationships

  • Has explicit logical structures and supports causal progression

  • Has clear headlines

  • Cites sources and has clear authorship

  • Includes statistics & quotations 

  • Has multimedia integration

  • Content that tells the world something new

  • Uses HTML anchor jump links to connect different sections of content to one another

  • Podcasts that include full transcripts in YouTube video descriptions, which are easily searchable

  • Appears on YouTube (a Google-owned company) based on the titles, descriptions & transcripts of videos

More information:

What is AI reading? Takeaways from a report on AI brand visibility

How AI Mode and AI Overviews work based on patents and why we need new strategic focus on SEO

What is generative engine optimization (GEO)?

How To Get Your Content (& Brand) Recommended By AI & LLMs

Google Ads data shows query length shift post-AI Mode

The winners and losers of Google’s AI Mode

SEO Is Dead. Say Hello to GEO

Stephen Goforth

Loss Aversion

People hate losses. Roughly speaking, losing something makes you twice as miserable as gaining the same thing makes you happy. In more technical language, people are “loss averse.” How do we know this?

Consider a simple experiment. Half the students in a class are given coffee mugs with the insignia of their home university embossed on them. The students who did not get a mug are asked to examine their neighbors’ mugs. Then, mug owners are invited to sell their mugs and nonowners are invited to buy them. They do so by answering the question “At each of the following prices, indicate whether you would be willing to (give up your mug/buy a mug).”

The results show that those with mugs demand roughly twice as much to give up their mugs as others are willing to pay to get one. Thousands of mugs have been used in dozens of replications of this experiment, but the results are nearly always the same. Once I have a mug, I don’t want to give it up. But if I don’t have one, I don’t feel an urgent need to buy one.

What this means is that people do not assign specific values to objects. When they have to give something up, they are hurt more than they are pleased if they acquire the very same thing.

Richard Thaler & Cass Sunstein, Nudge