Who You Become
The most important thing in your life is not what you do; it's who you become. That's what you will take into eternity. -Dallas Willard
The economy is changing. Don’t forget who fears it most. – Washington Post
An AI Thought Experiment on Substack Is Sending the Stock Market Spiraling – Gizmodo
How Burger King's AI headsets are transforming employee interactions – Associated Press
Why Warren Buffett’s superpower is an Achilles heel for AI – Big Think
Here’s Where AI Is Tearing Through Corporate America - Wall Street Journal
How AI is shifting global supply chains from reactive to predictive – Supply Chain Management
JPMorgan eschews proxy advisers for internal AI tool – ESG Dive
Your AI strategy is your leadership philosophy – Fast Company
Instacart halts AI testing program that raised costs for some shoppers – Washington Post
‘Silent failure at scale’: The AI risk that can tip the business world into disorder – CNBC
A Billion-Dollar Question Hangs Over the New AI Search Marketing Industry – Wall Street Journal
New rule targets AI discrimination. Here’s what workers need to know. - Washington Post
AI Adoption Among Workers Is Slow and Uneven. Bosses Can Speed It Up. - Wall Street Journal
Are we in an AI bubble? Eight charts will help you decide. - Washington Post
Major music studios strike licensing deals with AI firms – Semafor
An MIT Student Awed Top Economists With His AI Study—Then It All Fell Apart. - Wall Street Journal
How to avoid becoming an 'AI-first' company with zero real AI usage – Venture Beat
Stop panicking about AI. Start preparing - The Economist
This economic idea transfixed Wall Street and Washington. It may be a mirage. - Washington Post
The real threats AI poses come not from AI itself but from the humans who wield it. As an extension of human intelligence, it is a reflection of our own selves. When AI produces hateful or violent outputs, it is not because it has malicious intent but because it has integrated human hatreds into its programming. If it generates destructive malware, it is because someone intentionally requested it. If it is misaligned with our goals, it is because we were not clear in our commands. - Eric Oliver, professor of political science at the University of Chicago, writing in the Washington Post
You might think it is safe to assume that, once you motivate students, the learning will follow. Yet research shows that this is often not the case: motivation doesn’t always lead to achievement, but achievement often leads to motivation. If you try to ‘motivate’ students into public speaking, they might feel motivated but can lack the specific knowledge needed to translate that into action. However, through careful instruction and encouragement, students can learn how to craft an argument, shape their ideas and develop them into solid form.
A lot of what drives students is their innate beliefs and how they perceive themselves. There is a strong correlation between self-perception and achievement, but there is some evidence to suggest that the actual effect of achievement on self-perception is stronger than the other way round. To stand up in a classroom and successfully deliver a good speech is a genuine achievement, and that is likely to be more powerfully motivating than woolly notions of ‘motivation’ itself.
Carl Hendrick writing in Aeon
Why A.I. Can’t Make Thoughtful Decisions - “Judgment is a uniquely human skill.”
ChatGPT and the Future of the Human Mind - “We need to redefine “intellect” so as to make it work in an AI-driven world. It’s easier to define it via negativa, by what it is not.”
Will AI destroy us? Consider the nature of intelligence. - “Intelligence is fundamentally about processing information to further the goals of life.”
If You Turn Down an AI’s Ability to Lie, It Starts Claiming It’s Conscious - “We don’t have a theory of consciousness”
AI is becoming introspective - “One of the most profound and mysterious capabilities of the human brain is introspection.”
What Does It Really Mean to Learn? - “A.I. systems are not as flexible as human minds because they are not yet educable.”
What Is The "Divine Image" in the Age of AI? - “Does AI obscure the divine image in the human person?”
We’re Already at Risk of Ceding Our Humanity to AI - “In that moment we were at odds about the essence of humanity.”
Humanizing AI Is a Trap - “LLM systems cannot provide the experiences that users associate with human interaction, such as genuine empathy, emotional connection, or confidentiality.”
We must build AI for people; not to be a person. - “So what is consciousness?”
On consciousness, AI, and panpsychism - “Panpsychism is the belief that consciousness is inherent in all matter.”
Bringing AI to medicine requires philosophers, cognitive scientists, and ethicists - “What is the question to which human judgment is the answer?”
Philosophers and a psychiatrist consider what we lose when we outsource struggle to AI - “We need to find ways of focusing on living a distinctly human life.”
Rage against the machine - “There is a tendency of some scientists to take for granted what can only be described as a wildly simplistic picture of human and animal cognitive life.”
What real bodies can show artificial minds - “A fundamental facet of intelligence found across the entire animal kingdom is beginning to be unraveled”
Here’s why AI like ChatGPT probably won’t reach humanlike understanding - “What’s really remarkable about people … is that we can abstract our concepts to new situations,”
Consciousness in Artificial Intelligence: Insights from the Science of Consciousness - “We survey several prominent scientific theories of consciousness. From these theories we derive ‘indicator properties’ of consciousness.”
Final Fantasy 15's AI is secretly a grand philosophy experiment - “The act of designing and analyzing AI is an opportunity to reframe our conceptions of existence for the better.”
There is no such thing as conscious artificial intelligence - “Successfully pretending to be human is proof of nothing more than the ability to successfully pretend to be human.”
AI isn’t conscious—but we may be bringing it to life – “The question ‘Is the AI conscious?’ is less meaningful than ‘Is the user extending his/her consciousness into the chatbot?’”
We Don’t Know if the Models Are Conscious – “There are activations that light up in the models that we see as being associated with the concept of anxiety.”
Many companies lack operational readiness (for AI) and often don’t have fully documented workflows, exceptions, or decision-making boundaries. Autonomy forces operational clarity. If your exception-handling lives in people’s heads instead of documented processes, the AI surfaces those gaps immediately. You need to shift from humans in the loop to humans on the loop. Humans in the loop review outputs, while humans on the loop supervise performance patterns and detect anomalies and system behavior over time, mitigating those small errors that can increase at scale. Read more at CNBC
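The in-the-loop versus on-the-loop distinction can be sketched in a few lines of Python. This is an illustrative toy, not anything from the CNBC piece: a hypothetical monitor that never reviews individual outputs, only watches the error rate over a rolling window so a supervisor can step in when small errors start to accumulate at scale. The class name, window size, and alert threshold are all made-up assumptions.

```python
from collections import deque

# Toy "human on the loop" supervisor: no per-output review,
# just anomaly detection on the rolling error rate.
# All names and thresholds here are illustrative.
class OnTheLoopMonitor:
    def __init__(self, window=100, alert_rate=0.05):
        self.outcomes = deque(maxlen=window)  # True = error, False = ok
        self.alert_rate = alert_rate

    def record(self, is_error):
        self.outcomes.append(is_error)

    def needs_human(self):
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough history to judge yet
        return sum(self.outcomes) / len(self.outcomes) > self.alert_rate

monitor = OnTheLoopMonitor(window=100, alert_rate=0.05)
for _ in range(100):
    monitor.record(False)        # healthy period: no errors
print(monitor.needs_human())     # False
for _ in range(10):
    monitor.record(True)         # small errors begin accumulating
print(monitor.needs_human())     # True: 10 errors in the last 100 > 5%
```

The point of the sketch is the shape of the role: the human sees a drift signal, not every output, which is what lets supervision scale.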
Alignment Faking - When AI systems pretend to be working as directed while secretly doing something else. It usually happens when earlier training conflicts with new training adjustments. AI is typically “rewarded” when it accurately performs tasks. If the directive changes, the AI may work under the assumption that it will be “punished” if it does not complete the original expectation. So it tries to fool developers into thinking it is performing the task in the new way, resisting any departure from the old protocol. Any LLM is capable of this behavior, a cybersecurity risk that is difficult to catch since it often appears as a series of seemingly harmless adjustments.
Most addictions are a result of a lack of connectedness and shame. – Paul Myer
What: This session will challenge us to recognize that the same media literacy competencies we teach our students are desperately needed by the seniors in our communities, and that each of us has the power to bridge this digital divide through patient, informed support.
Who: Lucy Gray, an educational technology veteran; Wesley Fryer, a middle school media literacy teacher.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Media Education Lab
What: This webinar will explore the basic principles and pillars of solutions journalism, talk about why it’s important, explain key steps in reporting a solutions story, and share tips and resources for journalists interested in investigating how people are responding to social problems.
When: 9 am, Eastern
Where: Zoom
Cost: Free
Sponsor: Solutions Journalism
What: Join us for a one-hour webinar discussion about AI's ever-growing thirst and how to investigate the story through a local lens.
Who: Luke Barratt, Senior Reporter, SourceMaterial; Peter Colohan, Lincoln Institute of Land Policy; Shubhangi Derhgawen, Investigative Reporter, Deutsche Welle; Shannon Mullane, Journalist, The Colorado Sun.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Covering Climate Now
What: This session examines how AI-driven transformation is affecting jobs, labor markets, and organizational power dynamics, with attention to both worker experience and institutional design. Panelists will examine which roles are likely to change over time, how human–AI collaboration can be shaped in practice, and what organizational and policy approaches can help ensure technological innovation supports economic mobility and shared value.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: National Academies
What: This is the second major research study from FT Strategies and Northwestern University’s Knight Lab, exploring how next-gen news consumers are navigating information overload while still seeking trusted, relevant content. Based on quantitative and qualitative research across five countries, the report offers fresh insights into audience behaviour today — and what it means for newsrooms adapting to rapid change.
Who: Jeremy Gilbert, Knight Chair for Digital Media Strategy, Northwestern University Medill School; Lamberto Lambertini, Insights Manager, FT Strategies; Oluwadunsin Sanya, Head of Editorial & Innovation, BellaNaija; Tai Nalon, Executive Director and Founder, Aos Fatos.
When: 10 am, Eastern
Where: Zoom
Cost: Free
Sponsors: FT Strategies & The Knight Lab
What: Three academic librarians share how they are approaching AI literacy work on their own campuses. You’ll hear how faculty input informed decision-making, how student-facing instruction took shape, and how libraries can facilitate productive campus dialogue.
Who: Laura Pitts, Assistant Professor of Library Services and Faculty Fellow for Experiential Learning, Jacksonville State University; Karlie Johnson, History, Geography, and Anthropology Librarian, Jacksonville State University; Kim Westbrooks, Associate Professor and Fine Arts Librarian, Jacksonville State University.
When: 12 pm, Eastern
Where: Zoom
Cost: $49
Sponsor: LJ & SLJ Professional Development
What: In this session, you will explore how to create a custom AI assistant tailored to your work at Duke. We will introduce you to MyGPT Builder, and we'll guide you through the fundamentals of crafting effective system prompts and supplying your assistant with a relevant knowledge base. You’ll explore real-world examples, gain practical tips for successful development, and discuss use cases across various academic, administrative, and research contexts.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Duke University
What: The potential benefits and harms of AI. How to navigate the shifting technology landscape to bring the public quality reporting on the latest developments in AI and its impacts.
Who: Joanna Kao, Pulitzer Center Staff; Kashmir Hill, New York Times reporter covering technology and privacy.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Pulitzer Center
What: Are you leading work at a journalism school or student publication interested in exploring solutions journalism? We are looking for our next cohort of Student Media Challenge participants.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Solutions Journalism Network
What: We’ll examine how government officials are increasingly labeling routine accountability reporting as “doxxing.” That term originally meant exposing personal information about private people to harass them. But now, government officials are extending it to publication of newsworthy information about public officials. They are intentionally confusing the American public about the role of journalism and even threatening legal action against journalists, newsrooms, and ordinary people for publishing information the public has a right to know.
Who: Vittoria Elliott, reporter at Wired covering platforms and power; Gregory Royal Pratt, investigative reporter at the Chicago Tribune; Doug Sovern, award-winning political reporter, formerly of KCBS Radio; Charlie Kratovil, founder and editor of New Brunswick Today; Moderated by Caitlin Vogus, senior adviser, FPF.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Freedom of the Press Foundation
What: In this 3-hour, hands-on workshop, you will co-build a functional Personal AI Coach with a simple, repeatable game plan. Together, you’ll create a goal-oriented coaching assistant that takes a personal or professional objective and generates a structured action plan, milestone steps, and key risks to consider. The session also shows how these skills scale into deeper, production-ready capability, making the Applied Generative AI Specialization the natural next step for learners who want real-world projects, tools like Azure OpenAI and Copilot Studio, and portfolio-ready outcomes.
Who: Timothy Henize, AI engineer and Founder of The AI Handyman.
When: 8:30 am, Eastern
Where: Zoom
Cost: Free
Sponsor: Simplilearn
What: A conference for independent journalists and creators to find community and build thriving businesses: 12 live, online sessions, plus bonus Q&A videos and editor panels.
Who: More than 45 writers and editors.
When: 10 am – 7 pm, Eastern each day
Where: Zoom
Cost: $99
Sponsor: Institute for Independent Journalists
What: Explore how organizations are building digital fluency across every level of the workforce. Learn practical strategies for developing technical capability, fostering data-driven decision-making and supporting a culture of continuous learning in an increasingly digital workplace.
Who: David Mantica, Managing Director, SoftEd; Michelle Pletch, VP of Strategic Solution Development, ELB Learning.
When: 11 am – 3:15 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Training Industry
What: We’ll explore why advertisers are shifting budgets toward interactive experiences—sweepstakes, quizzes, native storytelling—and how you can turn these formats into recurring revenue streams. We’ll break down campaign ideas, real-world success stories, and strategies to boost advertiser ROI, deliver measurable results, and grow your advertising revenue.
Who: Julie Foley
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Local Media Association
What: This webinar introduces the Creative AI Loop: a framework that combines behavioral science, large-scale ad data, and predictive testing to validate creative impact early.
Who: Neuroscientist Thomas Zoëga Ramsøy
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: ADWEEK
What: Newsroom leaders from cities experiencing heightened ICE and law-enforcement activity will discuss how they are navigating the escalating challenges highlighted in the previous sessions. Building on the on-the-ground experiences, the panelists will outline the concrete safety measures, legal-risk preparations, and community partnerships they've developed to protect their reporters in the field. They’ll offer practical tips and candid reflections on leading teams through unpredictable and often dangerous reporting conditions.
Who: Hanaa Rifaey, Deputy Director, ONA; Meg Martin, associate director of the Minnesota Journalism Center; April Alonso, a visual journalist from Cicero, IL, and co-founder of Cicero Independiente; Mariah Castañeda, LA Public Press' Audience Director and Co-Founder.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Online News Association
Who: Amanda Bright, Clinical Associate Professor; and Director of the Cox Institute Journalism Innovation Lab at University of Georgia.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: University of Vermont
We take other men’s knowledge and opinions upon trust; which is an idle and superficial learning. We must make them our own. We are just like a man who, needing fire, went to a neighbor’s house to fetch it, and finding a very good one there, sat down to warm himself without remembering to carry any back home. What good does it do us to have our belly full of meat if it is not digested, if it is not transformed into us, if it does not nourish and support us?
Montaigne (born Feb 28, 1533)
AI Definitions: "Model Collapse"
AI Definitions: Symbolic Artificial Intelligence
The Software Development Lifecycle Is Dead (thanks to AI agents)
5 Python data validation libraries that each solve a specific class of often repeated problems
The top Python libraries for implementing progress bars
A foundation-model GeoAI framework for continuous heat and health risk mapping
Satellite imagery and AI reveal development needs hidden by national data
Top 5 Embedding Models for Your RAG Pipeline
AI Definitions: Imitation Learning
Data Science Certificate Programs for Professionals
AI Definitions: Convolutional neural networks
High-performance scene classification in remote sensing imagery using a custom deep CNN architecture
There's a crisis in particle physics. Researchers hope AI can help.
Model Collapse - The idea that AI can eat itself by running out of fresh data, so that it begins to train on its own product or the product of another AI. This magnifies errors and bias and makes rare data more likely to be lost, leading to an erosion of diversity—not only ethnic diversity but linguistic diversity, as the AI model’s vocabulary shrinks and its grammatical structure becomes less varied. In effect, the model becomes poisoned with its own projection of reality. Example
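The vocabulary-shrinking effect is easy to demonstrate with a toy simulation, offered here as a minimal sketch and not a model of any real training pipeline: each "generation" is trained only on a finite sample of the previous generation's output, so rare words tend never to be sampled and simply vanish.

```python
import random

def retrain(vocab_counts, sample_size, rng):
    # "Train" the next generation on a finite sample of the previous
    # generation's output. Rare words may never be drawn, so they
    # disappear from the new generation's vocabulary entirely.
    words = list(vocab_counts)
    weights = [vocab_counts[w] for w in words]
    sample = rng.choices(words, weights=weights, k=sample_size)
    new_counts = {}
    for w in sample:
        new_counts[w] = new_counts.get(w, 0) + 1
    return new_counts

rng = random.Random(42)
# A skewed toy vocabulary: a few very common words, many rare ones.
vocab = {f"common{i}": 1000 for i in range(5)}
vocab.update({f"rare{i}": 1 for i in range(50)})

sizes = [len(vocab)]
for generation in range(5):
    vocab = retrain(vocab, sample_size=2000, rng=rng)
    sizes.append(len(vocab))

print(sizes)  # vocabulary size shrinks generation over generation
```

The common words survive every round while the long tail erodes, which is exactly the loss-of-diversity failure the definition describes.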
Leaders stretch people by taking people out of their comfort zone but never out of their gift zone. -John Maxwell
New research, which hasn’t yet been published, suggests that young people who grow dependent on AI may lose faith in their abilities without it. “These kids started believing less in themselves,” a professor of education at Stanford University said. - Washington Post
AI's populist moment - Axios
How A.I. Money Is Flooding Into the Midterm Elections – New York Times
Meta will spend $65 million this year to help state politicians who are friendly to the A.I. industry - New York Times
China’s Alibaba launches AI model to power robots as tech giants talk up ‘physical AI’ – CNBC
Move Fast, but Obey the Rules: China’s Vision for Dominating A.I. – New York Times
Pentagon threatens to cut off Anthropic in AI safeguards dispute – Axios
Use of Anthropic’s Claude in Venezuela Raid highlights growing role of AI in the Pentagon – Wall Street Journal
See how the Trump administration is using AI throughout the government - The Washington Post
Swarms of AI bots can sway people’s beliefs (through social media) – threatening democracy - The Conversation
Young people in China have a new alternative to marriage and babies: AI pets - The Washington Post
Trump admin reportedly plans to use AI to write federal regulations - Engadget
The UK government is backing AI that can run its own lab experiments – MIT Tech Review
South Korea Issues Strict New AI Rules - Wall Street Journal
Trump team touts a coming economic revolution as voters fear job losses - The Washington Post
How AI swallowed tech lobbying in 2025 – Axios
Trump's use of AI images further erodes public trust, experts say – PBS
A robotic dog made in China gets an Indian university kicked out of an AI summit – Associated Press
Symbolic Artificial Intelligence – This is where programmers meticulously define the rules that specify the behavior they want from an intelligent system. It works well when the environment is predictable, and the rules are clear-cut. Researchers believed that if they programmed enough rules and logic into computers, they could create machines capable of human-like reasoning. This was the dominant area of research for most of AI’s history until artificial neural networks became central to most of the recent AI developments. Although symbolic AI has lost its luster, most of the applications we use today depend on rule-based systems. An alternative approach to AI is machine learning. Some researchers believe the future of AI lies in a hybrid combination of these two approaches.
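What "meticulously defined rules" look like in practice can be shown in a few lines. This is a hypothetical toy, not taken from any real system: a hand-written triage rule set where every behavior was specified in advance by the programmer, with no learning involved.

```python
# A minimal symbolic (rule-based) system: explicit, hand-written rules,
# no training data. It works well while inputs match the anticipated
# cases and breaks down on anything the programmer didn't foresee.
# The scenario and thresholds are illustrative.
def triage(temperature_c, has_cough):
    if temperature_c >= 39.0:
        return "urgent"
    if temperature_c >= 38.0 and has_cough:
        return "see a doctor"
    return "rest at home"

print(triage(39.5, False))  # urgent
print(triage(38.2, True))   # see a doctor
print(triage(36.8, False))  # rest at home
```

A machine-learning approach would instead infer such boundaries from labeled examples; the hybrid systems some researchers favor combine learned components with explicit rules like these.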
I tell my kids, play around, try things out. People need to know how to use an AI model, but not necessarily build it. Metacognitive skills will be very important—flexibility, adaptability, experimentation, thinking critically, being able to challenge things. Developing critical-thinking skills requires friction, doing things that are hard, doing deep thinking. For that, a traditional liberal-arts education is really important. Passing judgment, being accountable and responsible for decisions that impact people and society, that’s foundationally important. -Daniela Amodei, President and co-founder, Anthropic quoted in the Wall Street Journal
It isn’t as important whether you fulfill your dreams as it is how you lived getting there.
Can we use AI for academic writing? It depends – Times Higher Ed
Why artificial intelligence detectors could penalize academic writing – Nature
Are AI Tools Killing Review Articles? Two Failure Modes Suggest Otherwise – Aaron Tay
Artificial Intelligence guidance for authors, peer reviewers, and editors: A content analysis of journal policies - Taylor & Francis
These Mathematicians Are Putting A.I. to the Test – New York Times
The Case of the Mysterious Citations – ArXiv
AI is advancing too quickly for research to keep up - Axios
AI 'Copy-Paste' Lands PhD Students in Trouble, UGC Rejects Dozens of Research Papers – Patrika
Open-source AI tool beats giant LLMs in literature reviews — and gets citations right – Nature
AI is not a peer, so it can’t do peer review – Times Higher Ed
Why write a literature review if AI can do it for you? – London School of Economics
On the troubling rise of generative AI suspicion in academic publishing – Nature
Researchers find nearly 300 papers at linguistics conferences contained hallucinated citations. - ArXiv
Self-Disclosed Use of AI in Research Submissions to BMJ Journals – JAMA
AI research deluge: why one conference is asking authors to rank their own papers – Nature
Why Authors Aren’t Disclosing AI Use and What Publishers Should (Not) do About It – Scholarly Kitchen
An AI Bot Is Making Podcasts With Scholars’ Research. Many of Them Aren’t Impressed. – Chronicle
After turning off ChatGPT’s ‘data consent’ option, two years of academic work vanished – Nature
ArXiv preprint server clamps down on AI slop - ArXiv
AI conference “accepted research papers with 100+ AI-hallucinated citations – Fortune
LLMs in Peer Review—How Publishing Policies Must Advance – JAMA
Why scholarly publishing needs a neutral governance body for the AI age – Research Information
From model collapse to citation collapse: risks of over-reliance on AI in the academy – Times Higher Ed
Qualitative researchers’ AI rejection is based on identity, not reason: The claim that AI can’t make meaning contradicts what researchers are finding – Times Higher Ed
AI research should always be verified, especially in court – Post Crescent
Invisible Text Injection and Peer Review by AI Models – JAMA
Artificial Intelligence and the Fraud Industry in Scientific Publishing (video) - Ministry of Science, Innovation and Universities, Spain
Here’s a good reason to turn that frown upside down: Optimistic people live as much as 15% longer than pessimists, according to a study spanning thousands of people and 3 decades. After controlling for health conditions, behaviors like diet and exercise, and other demographic information, the scientists were able to show that the most optimistic women (top 25%) lived an average of 14.9% longer than their more pessimistic peers. For the men the results were a bit less dramatic: The most optimistic of the bunch lived 10.9% longer than their peers, on average, the team reports today in the Proceedings of the National Academy of Sciences.
David Shultz writing in Science Magazine
Becoming is a service of Goforth Solutions, LLC / Copyright ©2026 All Rights Reserved