Ambiguity and Narrative

The discomfort with ambiguity and arbitrariness is equally powerful, or more so, in our need for a rational understanding of our lives. We strive to fit the events of our lives into a coherent story that accounts for our circumstances, the things that befall us, and the choices we make. Each of us has a different narrative that has many threads woven into it from our shared culture and experience of being human, as well as many distinct threads that explain the singular events of one's personal past. All these experiences influence what comes to mind in a current situation and the narrative through which you make sense of it: why nobody in my family attended college until me. Why my father never made a fortune in business. Why I'd never want to work in a corporation, or, maybe, why I would never want to work for myself. We gravitate to the narratives that best explain our emotions. In this way, narrative and memory become one. The memories we organize meaningfully become those that are better remembered. Narrative provides not only meaning but also a mental framework for imbuing future experiences and information with meaning, in effect shaping new memories to fit our established constructs of the world and ourselves. The narrative of memory becomes central to our intuitions regarding the judgments we make and the actions we take. Because memory is a shape-shifter, reconciling the competing demands of emotions, suggestions, and narrative, it serves you well to stay open to the fallibility of your certainties: even your most cherished memories may not represent events in the exact way they occurred.

Peter C. Brown and Henry L. Roediger III, Make It Stick: The Science of Successful Learning

AI Definitions: Knowledge Collapse

Knowledge Collapse – A gradual narrowing of accessible information, along with a declining awareness of alternative or obscure viewpoints. With each training cycle, new AI models increasingly rely on previously produced AI-generated content, reinforcing prevailing narratives and further marginalizing less prominent perspectives. The resulting feedback loop creates a cycle where dominant ideas are continuously amplified while less widely held (and new) views are minimized. Underrepresented knowledge becomes less visible – not because it lacks merit, but because it is less frequently retrieved and less often cited. (also see “Synthetic Data”)
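The feedback loop described above can be sketched as a toy simulation (the categories and counts here are made up for illustration, not drawn from any real model): each "training cycle" samples from the previous cycle's output, so already-common views are redrawn and re-amplified more often, and once a rare view drops out of the pool it can never reappear.

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the toy run is repeatable

# Hypothetical starting corpus: one dominant view plus several niche ones.
corpus = ["mainstream"] * 70 + ["niche_a"] * 15 + ["niche_b"] * 10 + ["rare"] * 5

def next_generation(corpus, size=100):
    """One 'training cycle': the new corpus is sampled from the old one,
    so frequent views are drawn (and thus reinforced) more often."""
    return [random.choice(corpus) for _ in range(size)]

for gen in range(6):
    print(gen, dict(Counter(corpus)))  # watch the tail categories shrink
    corpus = next_generation(corpus)
```

Because each generation only samples what the previous one produced, the set of surviving viewpoints can shrink but never grow – a simple picture of why underrepresented knowledge fades without anyone deliberately removing it.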

More AI definitions

She Wrapped Him in Swaddling Clothes

And she gave birth to her firstborn, a son. She wrapped him in cloths and placed him in a manger, because there was no guest room available for them (Luke 2:7 NIV)

“She wrapped him in cloths.” Literally, he was wrapped in strips of cloth to keep him warm. The old King James translation uses the memorable phrase “swaddling clothes.”   

Do you think he cried? When you think of the manger and the child, do you imagine him crying?   

Mary put diapers on God.

The mention of a manger is where we get the idea he was born in a stable. Often, stables were caves, with feeding troughs for animals … mangers. It was probably dark and dirty. This is not the way the Messiah was expected to appear. How often our expectations and God’s reality are not in sync. How often he appears in unexpected places.

AI Definitions: Steganography

Steganography (pronounced STEG-an-ography, like the “Steg” in “stegosaurus”) - A method of tracking images by embedding a code into the pixels that is invisible to humans but travels along with the image throughout its lifetime. The marked images can be traced back to the original source with a high level of accuracy because the code is embedded directly into the image’s pixels. Because the watermarks live in the visual part of the image itself, they are nearly impossible to remove, surviving common manipulations such as aggressive cropping and taking screenshots of the image. If you’ve created an AI image recently, you’ve almost certainly used steganography without even knowing it. Most major AI image generation companies now use the tech. Some companies also add a poisoning layer to these images: if someone tries to use them for deepfakes, the results come out garbled and unusable.
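The basic idea of hiding a code in pixel values can be shown with the classic textbook technique, least-significant-bit (LSB) embedding. To be clear, this is not the scheme the companies above use – production watermarkers spread the signal redundantly across the whole image so it survives cropping and screenshots, which plain LSB does not – but it illustrates how bits can ride invisibly inside pixels. The tag string and pixel values below are arbitrary examples.

```python
def embed(pixels, tag):
    """Hide each bit of `tag` in the least significant bit of one pixel.
    Each pixel changes by at most 1 brightness level, invisible to humans."""
    bits = [int(b) for byte in tag.encode() for b in f"{byte:08b}"]
    if len(bits) > len(pixels):
        raise ValueError("image too small for tag")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract(pixels, length):
    """Read back `length` characters from the low bits of the pixels."""
    bits = [p & 1 for p in pixels[: length * 8]]
    data = bytes(
        int("".join(map(str, bits[i:i + 8])), 2)
        for i in range(0, len(bits), 8)
    )
    return data.decode()

# Toy 8x8 grayscale "image" as a flat list of brightness values.
image = list(range(100, 164))
marked = embed(image, "src42")   # hypothetical source tag
print(extract(marked, 5))        # recovers "src42"
```

A useful design point: because only the lowest bit of each pixel changes, the marked image is visually identical to the original – which is exactly why detection tools, not human eyes, are needed to find such codes.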

More AI definitions

What Deep Search & Deep Research Can & Can't Do

Most current Academic Deep Search and Deep Research tools are workflow-based agents operating within predefined patterns—not flexible reasoning systems that analyze task structure and devise novel approaches. This doesn’t diminish their value. Academic Deep Search’s iterative retrieval with LLM-based relevance judgment is a genuine breakthrough. Deep Research’s ability to generate well-cited reports fills real needs. These tools ARE agents in the technical sense, and within their designed scope, they work impressively.  But marketing language suggesting flexible reasoning, autonomous problem-solving, and human-like research assistance probably overstates current capabilities and can lead to misunderstanding by users who take the term “agent” or “research assistant” at face value. - Aaron Tay

22 Articles about the Business of Running an AI Company

These teenagers are already running their own AI companies – MSNBC

Explainable AI in Chat Interfaces – NN Group 

The Eerie Parallels Between AI Mania and the Dot-Com Bubble – Wall Street Journal

Senators Investigate Role of A.I. Data Centers in Rising Electricity Costs – New York Times  

An AI product’s position on the personality spectrum shapes how people engage with it – UX Design  

The Good, Bad and Ugly of AI - Wall Street Journal

The Architects of AI: Person of the Year 2025 – TIME  

Why AI's winners won't be decided by benchmarks – Axios  

Behind the Deal That Took Disney From AI Skeptic to OpenAI Investor - Wall Street Journal

Something Ominous is Happening in the AI Economy – The Atlantic

‘Circularity,’ Wall Street’s buzzword for investors, is a flashing warning for the AI boom - Washington Post

The New York Times sued Perplexity, an A.I. start-up, claiming that Perplexity repeatedly used its copyrighted work without permission. - New York Times 

A Prompt Engineering Framework for Large Language Model-Based Mental Health Chatbots – National Library of Medicine

ChatGPT started the AI race. Now its lead is looking shaky. - Washington Post 

A YouTube tool that uses creators’ biometrics to help them remove AI-generated videos that exploit their likeness also allows Google to train its AI models on that sensitive data – CNBC  

China's DeepSeek debuts two new AI models – Bloomberg  

A growing share of America’s hottest AI startups have turned to open Chinese AI models - NBC News  

Nvidia's massive investments are shaping the AI bubble debate – Axios  

Gemini is most ‘empathetic’ AI model, test shows - Semafor

A.I.’s Anti-A.I. Marketing Strategy - New York Times

Tech Titans Amass Multimillion-Dollar War Chests to Fight AI Regulation - Wall Street Journal 

Fears About A.I. Prompt Talks of Super PACs to Rein In the Industry - New York Times

AI Definitions: Small Language Models

Small Language Models (SLMs) – Requiring less data and training time than large language models, SLMs have fewer parameters, making them more practical on the spot or on smaller devices. Perhaps the biggest advantage of SLMs is that they can be fine-tuned for specific tasks or domains. They also offer enhanced privacy and security and are less prone to undetected hallucinations. Google’s Gemma is an example.

More AI definitions