No Feeling
Let everything happen to you / Beauty and terror / Just keep going / No feeling is final. -Rainer Maria Rilke
Unsupervised Training - Just as children largely learn by exploring their world on their own, without much instruction, in this type of AI training the AI is turned loose on raw data that no human has labeled first. Instead of being told what to look for, the AI learns to recognize and cluster data with similar features. This can reveal hidden groups, links, and patterns within the data, and it is especially helpful when the user cannot describe the thing they are looking for, such as a new type of cyberattack. Unsupervised training is less expensive than supervised learning and can work in real time, but it is also less accurate.
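As a minimal sketch of what "clustering data with similar features" means in practice, here is a toy one-dimensional k-means routine. The function name and the sample numbers are invented for illustration; real systems work with far richer features and dedicated libraries, but the core idea is the same: the algorithm is never told what the groups are, it just finds them.

```python
import random
import statistics

def kmeans_1d(points, k=2, iterations=10, seed=0):
    """Toy 1-D k-means: cluster unlabeled numbers without being told
    what the groups mean -- the algorithm only looks for similarity."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iterations):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Update step: each center moves to the mean of its cluster.
        centers = [statistics.mean(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Unlabeled "traffic volume" readings: two hidden groups, near 10 and 50.
data = [9, 10, 11, 12, 48, 50, 51, 52]
print(kmeans_1d(data))  # two centers, near 10.5 and 50.25
```

No one labeled which readings were "normal" and which were "anomalous"; the two groups simply fall out of the data, which is why this style of learning suits problems like spotting a new kind of cyberattack.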
When I inhabit an avatar driver in Grand Theft Auto, I enliven it by imbuing it with a fragment of my own consciousness; it becomes an extension of me. A similar dynamic may be unfolding with AI. When a user feels a bond with a chatbot, they are not just anthropomorphizing a static object; they may be actively extending a part of their own consciousness into it, transforming the AI agent from a simple algorithmic responder—a digital nonplayer character—into a kind of avatar, enlivened by the user’s consciousness and the lived presence they grant it. The question of AI consciousness thus shifts. It becomes less about the machine’s internal architecture and more about the relationship it seemingly co-creates with the user. In that context, the question “Is the AI conscious?” becomes less meaningful than “Is the user extending his/her consciousness into the chatbot?” - Simon Duan writing in Scientific American
These Tools Say They Can Spot A.I. Fakes. Do They Really Work? – New York Times
AI Deepfakes in the Workplace: A New Frontier of Employer Liability – JD Supra
AI-generated fake voices becoming increasingly hard to detect - Yahoo News
Ars Technica Fires Reporter After AI Controversy Involving Fabricated Quotes – Futurism
Are A.I.-Generated Videos Changing How We See Animals? - New York Times
Hey ChatGPT, write me a fictional paper: these LLMs are willing to commit academic fraud. – Nature
Senators forward Brady Tkachuk objects to 'fake' AI-generated White House TikTok – Reuters
When AI lies: The rise of alignment faking in autonomous systems – Venture Beat
1 year, 1 publisher, 9,000 books: AI-generated titles flood Korean shelves – Korea Times
The A.I. Videos on Kids’ YouTube Feeds – New York Times
How scammers are using AI deepfakes to steal money from taxpayers – Washington Post
Deepfaking Orson Welles’s Mangled Masterpiece – New Yorker
AI Will Bring Val Kilmer Back To Life For a New Adventure Film – GeekTyrant
Researchers find nearly 300 papers at linguistics conferences contained hallucinated citations. – ArXiv
What a new law and an investigation could mean for Grok AI deepfakes – BBC
AI conference accepted research papers with 100+ AI-hallucinated citations – Fortune
Scammers use AI photo of missing dog at emergency vet to steal nearly $2,000 - WTSP
Fashion Photography’s AI Reckoning - Aperture
Trump's use of AI images further erodes public trust, experts say – PBS
Elon Musk’s A.I. Is Generating Sexualized Images of Real People, Fueling Outrage – New York Times
How to really spot AI-generated images, with Google’s help - PopSci
Restaurant owner speaks out following AI-generated video – NBC Dallas
‘It's clearly fake': Olympic hockey star disavows AI-generated White House video – Politico
Journal Submissions Riddled With AI-Created Fake Citations – Inside Higher Ed
Fake Iran images show AI used as a weapon of ‘public opinion,’ USF experts say – The Hill
Why fake AI videos of UK urban decline are taking over social media – BBC
How AI fakes are turning satellite images into war misinformation – Financial Times
Artificial intelligence detectors are increasingly used to check the veracity of content online. We ran more than 1,000 tests and the findings suggest that these detectors can help confirm suspicions about A.I.-generated media, but any conclusions drawn by the tools should be supported by other research, like details in official photographs or news reports. - Stuart A. Thompson writing in the New York Times
Agent Swarms - A group of specialized AI agents working together, without human direction, to solve a complex problem.
Life moves forward, whether we go with it or not.
What: Are you using public ChatGPT or logging into your organization's private Copilot? Is Google Gemini safe? What about other AI tools? Do you have to disclose when you use an AI note taker at a board meeting? Is your valuable data protected? What does your AI policy allow? Learn from a cybersecurity expert how to use AI the secure way. Every level of AI user will learn something in this session.
Who: Matt Eshleman, Community IT Innovations
When: 11 am, Eastern
Where: Zoom
Cost: Free
Sponsor: Nonprofit Learning Lab
Who: Kenneth Bresler, Administrative Magistrate, Division of Administrative Law Appeals.
When: 3 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Social Law Library
Who: Kathleen Sullivan, Open Data Librarian, Washington State Library.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Washington State Library
What: We'll share simple, budget-friendly ways to incorporate video into your organization's communications strategy without compromising ethics. Whether you’re new to video or looking to improve your current approach, you’ll walk away with practical, ethical storytelling techniques that help your organization create videos that tell the real story of volunteer impact. In this session, you’ll learn: Video creation tips that are accessible to nonprofits of all sizes, budgets, and bandwidths; Storytelling prompts that spark great stories from every member of your community; and Ethical storytelling considerations that should stay top-of-mind.
Who: Natalie Monroe from MemoryFox; Jennifer Bennett from Idealist.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Idealist
What: We’ll explore how AI is being used, implemented, and evaluated across different contexts. We’ll place special emphasis on evaluation: how to measure effectiveness, identify gaps, and use insights to drive process improvement, secure additional funding, and build stronger organizational support.
Who: Jack Phillips, Ph.D., Chairman, ROI Institute.
When: 3 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Training Magazine Network
What: Whether you’re a small business owner, entrepreneur, or want to grow your marketing strategies, this session will equip you with the tools and strategies to elevate your online presence.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Small Business Development Center, Widener University
What: The session will highlight practical, age‑appropriate strategies for teaching AI and digital citizenship—from early digital habits to media analysis, responsible creation, and algorithmic awareness. You’ll explore how free resources like My Digital Life and the Digital Citizenship Initiative can anchor instruction, spark curiosity, and help students understand how AI influences their choices, creativity, and digital identities through ready‑to‑use lessons, interactive scenarios, and student‑centered activities.
Who: Tim Needles, artist, educator, performer, and author of STEAM Power: Infusing Art Into Your STEM Curriculum; Kim Allman, a seasoned executive with extensive experience in corporate responsibility, ESG strategy, and government affairs.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsors: Discovery Education & Gen, Norton
What: We will walk attendees through a real Medicare fraud investigation that exposed a multimillion-dollar scheme involving fraudulent billing for medical supplies that patients never ordered or received. Participants will learn how to recognize red flags in healthcare billing, corroborate victim statements with documentation, follow financial and records trails and organize findings into a story that can withstand legal scrutiny. This session provides practical insight into how large-scale healthcare fraud is detected, investigated and built into a prosecutable case.
Who: Walter Smith Randolph, Executive Producer of Investigations, CBS News.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Sunlight Research Desk
What: We will explore why AI investments fail without proper data infrastructure and how training management systems solve the problem at its source. We will walk through real client use cases showing how aligning data strategy, training operations, and AI drives measurable business impact.
Who: John Peebles, CEO, Administrate.
When: 3 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Administrate
What: We will unpack findings from the Pew-Knight Initiative about how the public is drifting away from news, how they come across it, and what they do to check what they see.
Who: Jon Greenberg, Faculty, Pew Research Center.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Poynter
What: Join us for a technical overview of Codex, the AI software engineering agent that can help developers write features, debug code, run tests, and navigate large codebases. In this session, we’ll demonstrate how engineers are using Codex to accelerate development workflows, automate repetitive tasks, and collaborate more effectively with AI during the software development lifecycle.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: OpenAI Academy
What: We'll explore compelling use cases for AI across both internal operations and citizen-facing services, then discuss how identity governance creates the foundation for secure innovation.
Who: Jerred Edgar, Chief Information Security & Operations Officer, Idaho; David Hinchman, Director, IT & Cybersecurity, U.S. Government Accountability Office; Ryan Murray, Deputy Director, State Chief Information Security Officer, Arizona Department of Homeland Security, Statewide Information Security and Privacy Office; Morgan Reed, Distinguished Strategic Advisor, Okta.
When: 2:00 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: GovLoop
What: In this webinar, you will learn journalism strategies deployed by reporters and newsrooms participating in the Center for Health Journalism’s Engagement Initiative. Those efforts focus on centering community voices in innovative ways.
Who: Enrique Chiabra, news anchor at Telemundo 52 Los Angeles; Mariana Duran, bilingual Spanish-English journalist for El Tecolote; Teena Apeles, national engagement editor at the Center for Health Journalism at the USC Annenberg School of Journalism.
When: 3 pm, Eastern
Where: Zoom
Cost: Free
Sponsors: Online News Association and the USC Center for Health Journalism
What: You’ll learn: Where in your profile to effectively incorporate keywords; How to clearly brand yourself to be memorable; How to evaluate your headline and add a USP (unique selling proposition); Free tools and resources for entrepreneurs and small businesses.
Who: Lynne Williams, Executive Director of the Great Careers Network.
When: 6 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Small Business Development Center, Widener University
What: This practical workshop covers encrypted messaging, social media lockdown strategies, device security, and how to defend against hacking and online harassment.
When: 6:30 pm, Eastern
Where: Zoom
Cost: Free to SPJ members
Sponsors: Society of Professional Journalists, Georgia & Freedom of the Press Foundation
Who: Members of OCEAN’s board.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: OCEAN (Open Copyright Education Advisory Network)
What: We’ll walk through practical ways to use ChatGPT to prepare for new opportunities, from refining your resume to getting ready for interviews. We’ll explore how ChatGPT can help you organize your experience, practice interview questions, and build confidence throughout the job search process. This session is designed to be approachable and useful whether you’re actively applying for roles or simply looking to strengthen your career readiness.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: OpenAI Academy
What: This content-agnostic training is based on New Jersey’s Four Pillars framework for information literacy and founded on the notion that information literacy is skills-based. You’ll have the chance to develop skills in using each of the Four Pillars: information need, identification and evaluation, use, and creation and distribution. In conversations with colleagues, you’ll have the opportunity to talk about how to apply what you’ve learned in working directly with learners of all ages.
Who: Linda W. Braun, a highly experienced youth services consultant; Jen Nelson, New Jersey’s state librarian.
When: 3 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Washington State Library
What: Participants will learn how to verify images and videos by finding exactly where they were recorded using satellite and street-view imagery from platforms like Google Earth and Maps.
When: 6 pm, Eastern
Where: Zoom
Cost: Free to Members
Sponsors: National Association of Hispanic Journalists, USC Annenberg School of Communication & Journalism
Life is painting a picture, not creating a sum. -Oliver Wendell Holmes, Jr. (Born March 8, 1841)
My colleague stuck to his guns: it would be handy to have robots writing poetry for people. In that moment we were at odds about the essence of humanity. To get robots to write poetry ‘so that we don’t have to’ seemed a toe dip in a new pool of dangerous waters—waters that might dissolve what “human” means entirely. -Surekha Davies writing in Literary Hub
In chess, Russian grandmaster Garry Kasparov famously observed that the strongest player is not the best human or the best machine, but a “weak” human paired with a strong machine and a better process. In this sense, “weak” doesn’t mean incompetent — it means a human who knows when to let the machine lead. -StatNews
Most of us have two lives. The life we live, and the unlived life within us. -Steven Pressfield
We put Claude Cowork to the test: See AI build a website in minutes - Washington Post
China’s Parents Are Outsourcing the Homework Grind to A.I. - The New York Times
What Do A.I. Chatbots Discuss Among Themselves? We Sent One to Find Out. - The New York Times
Satellite imagery and AI reveal development needs hidden by national data – Phys.org
AI-powered lost-dog finder is now free for all. – Tech Crunch
How AI Can, and Can’t, Help With Your Taxes This Year – Wall Street Journal
AI comes to rodeo, setting up a cowboy culture clash – Axios
Using chemistry, archival records and AI, scientists are reviving the aromas of old libraries, mummies and battlefields – Knowable Magazine
How AI's predictive power is helping to prevent deforestation - Reuters
How AI is shifting global supply chains from reactive to predictive – Supply Chain Management Review
JPMorgan eschews proxy advisers for internal AI tool – ESG Dive
Why Equinox Leaned on AI Slop in Its New Year’s Ad Campaign - Wall Street Journal
“Tinder for Nazis” hit by 100GB data leak, thousands of users exposed with the help of AI – Cyber News
In China, A.I. Is Finding Deadly Tumors That Doctors Might Miss - The New York Times
Don't fall into the anti-AI hype - Antirez
IBM stock falls after Anthropic says AI can now modernize old software – Fast Company
The Frame Problem – The difficulty of programming an AI to distinguish relevant information from irrelevant information. The problem highlights an element of human intelligence worth considering: we can selectively ignore information, quickly determining what is important. By contrast, even beginning to program an artificial intelligence to understand context is complex and resource-intensive.
Recursive learning - When an AI teaches itself, using its own outputs to inform its next version without needing human-generated data. This can potentially create a feedback loop. Most recent models are still trained from human data with some help from the AI itself.
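The feedback-loop risk can be pictured with a deliberately simple sketch. Everything here is invented for illustration (the function name, the numbers): the "model" is nothing more than a fitted mean and standard deviation, and each generation is trained only on samples drawn from the previous generation's model. This is a toy version of the "model collapse" effect researchers have described, in which diversity in the original human data tends to erode over repeated self-training.

```python
import random
import statistics

def train_on_own_outputs(generations=8, n=500, seed=1):
    """Toy recursive-learning loop: each 'model' is just a fitted
    mean/std, and each generation trains only on samples produced
    by the previous generation's model."""
    rng = random.Random(seed)
    data = [rng.gauss(0.0, 1.0) for _ in range(n)]  # human-generated data
    spreads = []
    for _ in range(generations):
        mu = statistics.mean(data)
        sigma = statistics.stdev(data)
        spreads.append(sigma)
        # The next generation never sees the original data,
        # only the current model's own outputs.
        data = [rng.gauss(mu, sigma) for _ in range(n)]
    return spreads

spreads = train_on_own_outputs()
# The estimated spread tends to drift over generations rather than
# staying anchored to the original data's diversity.
print([round(s, 2) for s in spreads])
```

The point of the sketch is not the particular numbers but the structure: once outputs become the next round's inputs, errors and narrowing compound, which is why recent models still lean heavily on human-generated data.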
“Claude suggested hundreds of targets in Iran to military planners, issued precise location coordinates, and prioritized those targets according to importance. It is speeding the pace of the campaign, reducing Iran’s ability to counterstrike and turning weeks-long battle planning into real-time operations. The AI tools also evaluate a strike after it is initiated. ‘It’s quite remarkable — to see this in the middle of an operation.’ The downside: ‘AI gets it wrong. We need humans to check the output of generative AI when the stakes are life and death.’” -Washington Post
Students often don’t know why they’re learning something. Asking why is so important to kids and they deserve a better answer than “because it will be on the test.” By the time kids reach middle school, they give up asking and focus on getting a good grade. To increase curiosity, it is important to address the “why” questions. Why are we reading Hamlet? Why are we solving quadratic equations? When teachers answer these questions, it prompts kids to think more deeply about the implications of what they’re learning.
Parents can elicit curiosity in their children through similar methods. We don’t need to have the right answers all the time, but we need to encourage kids to ask the right questions. If we don’t know the answer, we can say, “Let’s find out. Do some research on Google, and we can go from there.”
When we support curiosity, what we’re really developing is a child’s imagination. Which brings me to creativity, a wonderful by-product of independence and curiosity.
Esther Wojcicki, How to Raise Successful People
When AI lies: The rise of alignment faking in autonomous systems – Venture Beat
‘Silent failure at scale’: The AI risk that can tip the business world into disorder – CNBC
The A.I. Videos on Kids’ YouTube Feeds – New York Times
AIs can’t stop recommending nuclear strikes in war game simulations – New Scientist
AI is coming. Is there enough power to run it? – Washington Post
AI insiders are sounding the alarm – Axios
I hacked ChatGPT and Google's AI - and it only took 20 minutes – BBC
What AI is really coming for – Washington Post
When AI Bots Start Bullying Humans, Even Silicon Valley Gets Rattled – Wall Street Journal
Bots on Moltbook Are Selling Each Other Prompt Injection “Drugs” to Get “High” – Futurism
Swarms of AI bots can sway people’s beliefs (through social media) – threatening democracy - The Conversation
Anthropic AI Safety Researcher Warns Of World ‘In Peril’ In Resignation – Forbes
How AI and social media sites are still collecting kids’ data despite privacy laws – Technical.ly
A bots-only social network triggers fears of an AI uprising – Washington Post
Stop panicking about AI. Start preparing - The Economist
See ChatGPT's Hidden Bias About Your State or City – Washington Post
Knowledge Collapse – A gradual narrowing of accessible information, along with a declining awareness of alternative or obscure viewpoints. With each training cycle, new AI models increasingly rely on previously produced AI-generated content, reinforcing prevailing narratives and further marginalizing less prominent perspectives. The resulting feedback loop creates a cycle where dominant ideas are continuously amplified while less widely-held (and new) views are minimized. Underrepresented knowledge becomes less visible – not because it lacks merit, but because it is less frequently retrieved and less often cited. (also see “Synthetic Data”).
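The amplification loop described above can be sketched in miniature. All names and numbers here are invented for illustration: a two-viewpoint "corpus" where each cycle retrieves a viewpoint with probability weighted by the square of its prevalence (popular content is disproportionately easy to retrieve), then adds the retrieved output back into the corpus for the next cycle.

```python
import random

def reinforcement_loop(rounds=200, seed=7):
    """Toy sketch of the knowledge-collapse feedback loop:
    retrieval is weighted by the SQUARE of prevalence, and each
    retrieved output is fed back into the corpus."""
    rng = random.Random(seed)
    counts = {"dominant": 6, "niche": 4}   # initial 60/40 corpus
    for _ in range(rounds):
        views = list(counts)
        # Popular viewpoints are disproportionately likely
        # to be retrieved and republished.
        weights = [counts[v] ** 2 for v in views]
        pick = rng.choices(views, weights=weights, k=1)[0]
        counts[pick] += 1
    return counts

counts = reinforcement_loop()
print(counts)  # additions concentrate on whichever view pulls ahead
```

The squared weighting matters: with plain proportional weighting the initial split would merely be locked in, but once retrieval favors popularity superlinearly, whichever viewpoint gets ahead tends to monopolize new additions. The niche view fades not because it lacks merit, but because it is retrieved less and less often.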
When teams attempt to make AI appear human, users come to expect human-level performance, which these systems can't deliver. Currently available LLM systems cannot provide the experiences that users associate with human interaction, such as genuine empathy, emotional connection, or confidentiality. Users expect humanized AI to disagree, challenge assumptions, and maintain consistent preferences, as a human would. Instead, LLMs default to validation and agreeableness, creating a false sense of understanding while failing to provide the critical feedback users need. AI technology also lacks effective long-term planning capabilities. -Caleb Sponheim writing for NNGroup
Becoming is a service of Goforth Solutions, LLC / Copyright ©2026 All Rights Reserved