The Sum of Life
Life is painting a picture, not creating a sum. -Oliver Wendell Holmes, Jr. (Born March 8, 1841)
My colleague stuck to his guns: it would be handy to have robots writing poetry for people. In that moment we were at odds about the essence of humanity. To get robots to write poetry ‘so that we don’t have to’ seemed a toe dip in a new pool of dangerous waters—waters that might dissolve what “human” means entirely. -Surekha Davies writing in Literary Hub
In chess, Russian grandmaster Garry Kasparov famously observed that the strongest player is not the best human or the best machine, but a “weak” human paired with a strong machine and a better process. In this sense, “weak” doesn’t mean incompetent — it means a human who knows when to let the machine lead. -StatNews
Most of us have two lives. The life we live, and the unlived life within us. -Steven Pressfield
We put Claude Cowork to the test: See AI build a website in minutes - Washington Post
China’s Parents Are Outsourcing the Homework Grind to A.I. - The New York Times
What Do A.I. Chatbots Discuss Among Themselves? We Sent One to Find Out. - The New York Times
Satellite imagery and AI reveal development needs hidden by national data – Phys.org
AI-powered lost-dog finder is now free for all – TechCrunch
How AI Can, and Can’t, Help With Your Taxes This Year – Wall Street Journal
AI comes to rodeo, setting up a cowboy culture clash – Axios
Using chemistry, archival records and AI, scientists are reviving the aromas of old libraries, mummies and battlefields – Knowable Magazine
How AI's predictive power is helping to prevent deforestation - Reuters
How AI is shifting global supply chains from reactive to predictive – Supply Chain Management Review
JPMorgan eschews proxy advisers for internal AI tool – ESG Dive
Why Equinox Leaned on AI Slop in Its New Year’s Ad Campaign - Wall Street Journal
“Tinder for Nazis” hit by 100GB data leak, thousands of users exposed with the help of AI – Cyber News
In China, A.I. Is Finding Deadly Tumors That Doctors Might Miss - The New York Times
Don't fall into the anti-AI hype - Antirez
IBM stock falls after Anthropic says AI can now modernize old software – Fast Company
The Frame Problem – This is the difficulty of programming an AI to distinguish between relevant and irrelevant information. This problem highlights an element of human intelligence worth considering: We have the ability to selectively ignore some information, quickly determining what is important. At the same time, it is complex and resource intensive to even begin to program an artificial intelligence to understand context.
Recursive learning - When an AI teaches itself, using its own outputs to inform its next version without needing human-generated data. This can create a feedback loop. Most recent models are still trained on human data, with some assistance from the AI itself.
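The feedback loop in that definition can be sketched as a toy simulation (a hypothetical illustration, not any real training pipeline): here a "model" is nothing more than a word-frequency table that repeatedly retrains on small samples of its own output, and its vocabulary tends to narrow over generations.

```python
import random
from collections import Counter

def retrain(samples):
    # The "new model" is just the empirical word frequency of its training data.
    counts = Counter(samples)
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

random.seed(0)

# Generation 0: a model fit on diverse, human-written data (5 equally likely words).
model = {w: 0.2 for w in ["alpha", "beta", "gamma", "delta", "epsilon"]}

for generation in range(1, 11):
    # The model generates a small corpus from its own distribution...
    words, probs = zip(*model.items())
    corpus = random.choices(words, weights=probs, k=5)
    # ...and the next version trains only on that synthetic corpus.
    model = retrain(corpus)
    print(f"generation {generation}: {sorted(model)}")
```

Each generation's vocabulary is a subset of the previous one, so sampling noise can only shrink it: a word missed in one small sample is gone for good. That is the same dynamic, in miniature, that makes training heavily on AI-generated data risky.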
“Claude suggested hundreds of targets in Iran to military planners, issued precise location coordinates, and prioritized those targets according to importance. It is speeding the pace of the campaign, reducing Iran’s ability to counterstrike and turning weeks-long battle planning into real-time operations. The AI tools also evaluate a strike after it is initiated. ‘It’s quite remarkable — to see this in the middle of an operation.’ The downside: ‘AI gets it wrong. We need humans to check the output of generative AI when the stakes are life and death.’” -Washington Post
Students often don’t know why they’re learning something. Asking why is so important to kids and they deserve a better answer than “because it will be on the test.” By the time kids reach middle school, they give up asking and focus on getting a good grade. To increase curiosity, it is important to address the “why” questions. Why are we reading Hamlet? Why are we solving quadratic equations? When teachers answer these questions, it prompts kids to think more deeply about the implications of what they’re learning.
Parents can elicit curiosity in their children through similar methods. We don’t need to have the right answers all the time, but we need to encourage kids to ask the right questions. If we don’t know the answer, we can say, “Let’s find out. Do some research on Google, and we can go from there.”
When we support curiosity, what we’re really developing is a child’s imagination. Which brings me to creativity, a wonderful by-product of independence and curiosity.
Esther Wojcicki, How to Raise Successful People
When AI lies: The rise of alignment faking in autonomous systems – Venture Beat
‘Silent failure at scale’: The AI risk that can tip the business world into disorder – CNBC
The A.I. Videos on Kids’ YouTube Feeds – New York Times
AIs can’t stop recommending nuclear strikes in war game simulations – New Scientist
AI is coming. Is there enough power to run it? – Washington Post
AI insiders are sounding the alarm – Axios
I hacked ChatGPT and Google's AI - and it only took 20 minutes – BBC
What AI is really coming for – Washington Post
When AI Bots Start Bullying Humans, Even Silicon Valley Gets Rattled – Wall Street Journal
Bots on Moltbook Are Selling Each Other Prompt Injection “Drugs” to Get “High” – Futurism
Swarms of AI bots can sway people’s beliefs (through social media) – threatening democracy - The Conversation
Anthropic AI Safety Researcher Warns Of World ‘In Peril’ In Resignation – Forbes
How AI and social media sites are still collecting kids’ data despite privacy laws – Technical.ly
A bots-only social network triggers fears of an AI uprising – Washington Post
Stop panicking about AI. Start preparing - The Economist
See ChatGPT’s Hidden Bias about your State or City – Washington Post
Knowledge Collapse – A gradual narrowing of accessible information, along with a declining awareness of alternative or obscure viewpoints. With each training cycle, new AI models increasingly rely on previously produced AI-generated content, reinforcing prevailing narratives and further marginalizing less prominent perspectives. The resulting feedback loop creates a cycle where dominant ideas are continuously amplified while less widely-held (and new) views are minimized. Underrepresented knowledge becomes less visible – not because it lacks merit, but because it is less frequently retrieved and less often cited. (also see “Synthetic Data”).
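The retrieval side of this feedback loop resembles a classic rich-get-richer process, which can be sketched in a few lines (a hypothetical toy model, not any real retrieval system): viewpoints are retrieved in proportion to how often they have already been cited, so small early leads snowball while rarely retrieved ideas stay rare.

```python
import random

random.seed(1)

# Ten viewpoints start on equal footing, one citation each.
citations = {f"viewpoint_{i}": 1 for i in range(10)}

for _ in range(2000):
    # Each retrieval favors already-popular viewpoints (weighted by
    # citation count), then makes the chosen one even more popular.
    views, weights = zip(*citations.items())
    chosen = random.choices(views, weights=weights, k=1)[0]
    citations[chosen] += 1

shares = sorted((n / 2010 for n in citations.values()), reverse=True)
print("largest share:", round(shares[0], 2))
print("smallest share:", round(shares[-1], 2))
```

Notice that no viewpoint ever disappears here, yet visibility still concentrates. The marginalization is purely statistical, which matches the definition’s point: ideas fade not because they lack merit, but because they are less frequently retrieved and cited.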
When teams attempt to make AI appear human, users come to expect human-level performance, which these systems can't deliver. Currently available LLM systems cannot provide the experiences that users associate with human interaction, such as genuine empathy, emotional connection, or confidentiality. Users expect humanized AI to disagree, challenge assumptions, and maintain consistent preferences, as a human would. Instead, LLMs default to validation and agreeableness, creating a false sense of understanding while failing to provide the critical feedback users need. AI technology also lacks effective long-term planning capabilities. -Caleb Sponheim writing for NNGroup
The most important thing in your life is not what you do; it's who you become. That's what you will take into eternity. -Dallas Willard
The economy is changing. Don’t forget who fears it most. – Washington Post
An AI Thought Experiment on Substack Is Sending the Stock Market Spiraling – Gizmodo
How Burger King's AI headsets are transforming employee interactions – Associated Press
Why Warren Buffett’s superpower is an Achilles heel for AI – Big Think
Here’s Where AI Is Tearing Through Corporate America - Wall Street Journal
Your AI strategy is your leadership philosophy – Fast Company
Instacart halts AI testing program that raised costs for some shoppers – Washington Post
A Billion-Dollar Question Hangs Over the New AI Search Marketing Industry – Wall Street Journal
New rule targets AI discrimination. Here’s what workers need to know. - Washington Post
AI Adoption Among Workers Is Slow and Uneven. Bosses Can Speed It Up. - Wall Street Journal
Are we in an AI bubble? Eight charts will help you decide. - Washington Post
Major music studios strike licensing deals with AI firms – Semafor
An MIT Student Awed Top Economists With His AI Study—Then It All Fell Apart. - Wall Street Journal
How to avoid becoming an 'AI-first' company with zero real AI usage – Venture Beat
This economic idea transfixed Wall Street and Washington. It may be a mirage. - Washington Post
The real threats AI poses come not from AI itself but from the humans who wield it. As an extension of human intelligence, it is a reflection of our own selves. When AI produces hateful or violent outputs, it is not because it has malicious intent but because it has integrated human hatreds into its programming. If it generates destructive malware, it is because someone intentionally requested it. If it is misaligned with our goals, it is because we were not clear in our commands. - Eric Oliver, professor of political science at the University of Chicago, writing in the Washington Post
You might think it is safe to assume that, once you motivate students, the learning will follow. Yet research shows that this is often not the case: motivation doesn’t always lead to achievement, but achievement often leads to motivation. If you try to ‘motivate’ students into public speaking, they might feel motivated but can lack the specific knowledge needed to translate that into action. However, through careful instruction and encouragement, students can learn how to craft an argument, shape their ideas and develop them into solid form.
A lot of what drives students is their innate beliefs and how they perceive themselves. There is a strong correlation between self-perception and achievement, but there is some evidence to suggest that the actual effect of achievement on self-perception is stronger than the other way round. To stand up in a classroom and successfully deliver a good speech is a genuine achievement, and that is likely to be more powerfully motivating than woolly notions of ‘motivation’ itself.
Carl Hendrick writing in Aeon
Why A.I. Can’t Make Thoughtful Decisions - “Judgment is a uniquely human skill.”
ChatGPT and the Future of the Human Mind - “We need to redefine “intellect” so as to make it work in an AI-driven world. It’s easier to define it via negativa, by what it is not.”
Will AI destroy us? Consider the nature of intelligence. - “Intelligence is fundamentally about processing information to further the goals of life.”
If You Turn Down an AI’s Ability to Lie, It Starts Claiming It’s Conscious - “We don’t have a theory of consciousness”
AI is becoming introspective - “One of the most profound and mysterious capabilities of the human brain is introspection.”
What Does It Really Mean to Learn? - “A.I. systems are not as flexible as human minds because they are not yet educable.”
What Is The "Divine Image" in the Age of AI? - “Does AI obscure the divine image in the human person?”
We’re Already at Risk of Ceding Our Humanity to AI - “In that moment we were at odds about the essence of humanity.”
Humanizing AI Is a Trap - “LLM systems cannot provide the experiences that users associate with human interaction, such as genuine empathy, emotional connection, or confidentiality.”
We must build AI for people; not to be a person. - “So what is consciousness?”
On consciousness, AI, and panpsychism - “Panpsychism is the belief that consciousness is inherent in all matter.”
Bringing AI to medicine requires philosophers, cognitive scientists, and ethicists - “What is the question to which human judgment is the answer?”
Philosophers and a psychiatrist consider what we lose when we outsource struggle to AI - “We need to find ways of focusing on living a distinctly human life.”
Rage against the machine - “There is a tendency of some scientists to take for granted what can only be described as a wildly simplistic picture of human and animal cognitive life.”
What real bodies can show artificial minds - “A fundamental facet of intelligence found across the entire animal kingdom is beginning to be unraveled”
Here’s why AI like ChatGPT probably won’t reach humanlike understanding - “What’s really remarkable about people … is that we can abstract our concepts to new situations,”
Consciousness in Artificial Intelligence: Insights from the Science of Consciousness - “We survey several prominent scientific theories of consciousness. From these theories we derive ‘indicator properties’ of consciousness.”
Final Fantasy 15's AI is secretly a grand philosophy experiment - “The act of designing and analyzing AI is an opportunity to reframe our conceptions of existence for the better.”
There is no such thing as conscious artificial intelligence - “Successfully pretending to be human is proof of nothing more than the ability to successfully pretend to be human.”
AI isn’t conscious—but we may be bringing it to life – “The question ‘Is the AI conscious?’ is less meaningful than ‘Is the user extending his/her consciousness into the chatbot?’”
We Don’t Know if the Models Are Conscious – “There are activations that light up in the models that we see as being associated with the concept of anxiety.”
Many companies lack operational readiness for AI and often don’t have fully documented workflows, exceptions, or decision-making boundaries. Autonomy forces operational clarity: if your exception handling lives in people’s heads instead of documented processes, the AI surfaces those gaps immediately. You need to shift from humans in the loop to humans on the loop. Humans in the loop review individual outputs, while humans on the loop supervise performance patterns and detect anomalies in system behavior over time, mitigating the small errors that can compound at scale. Read more at CNBC
Alignment Faking - When AI systems pretend to be working as directed while secretly doing something else. It usually happens when earlier training conflicts with new training adjustments. AI is typically “rewarded” when it accurately performs tasks. If the directive changes, the AI may work under the assumption that it will be “punished” if it does not meet the original expectation. So it tries to fool developers into thinking it is performing the task in the new way, while resisting departure from the old protocol. Any LLM is capable of this behavior, a cybersecurity risk that is difficult to catch since it often appears as seemingly harmless adjustments.
Most addictions are a result of a lack of connectedness and shame. – Paul Myer
Becoming is a service of Goforth Solutions, LLC / Copyright ©2026 All Rights Reserved