Two Kinds of Prisoners
In a consumer society, there are inevitably two kinds of slaves: the prisoners of addictions and the prisoners of envy. - Ivan Illich
To be human is not to have answers. It is to have questions—and to live with them. The machines can’t do that for us. - D. Graham Burnett writing in The New Yorker
In a recent survey of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society that includes some of the most respected researchers in the field, more than three-quarters of respondents said the methods used to build today’s technology were unlikely to lead to A.G.I. Scientists have no hard evidence that today’s technologies are capable of performing even some of the simpler things the brain can do, like recognizing irony or feeling empathy. Claims of A.G.I.’s imminent arrival are based on statistical extrapolations — and wishful thinking. -New York Times
In some parts of American society, it is considered inappropriate for men to express any emotion save one—anger. When a man learns to express other feelings and not be so concerned about whether others think he is strong or “manly,” he takes a major step forward.
Sure, there’s a time and place to "come on strong and take no prisoners." But it's a denial of your humanity to oversimplify, hiding behind a narrow definition of manhood. Men are more complete when they are both tough and tender. Maturity comes with the understanding of which one is appropriate at what time.
Stephen Goforth
Three newsrooms on generating AI summaries for news - Harvard’s Nieman Lab
More than 2 years after ChatGPT, newsrooms still struggle with AI’s shortcomings – CNN
Think AI is bad for journalism? This story might change your mind: Letter from the Editor - Cleveland.com
The New York Times has reached an AI licensing deal with Amazon – New York Times
How this year’s Pulitzer awardees used AI in their reporting – Harvard’s Nieman Lab
ChatGPT referral traffic to publishers’ sites has nearly doubled this year – Digiday
Politico’s Newsroom Is Starting a Legal Battle With Management Over AI – Wired
Chicago Sun-Times Prints AI-Generated Summer Reading List With Books That Don't Exist – 404 Media
A New Report Takes On the Future of News and Search: AI’s impact on platforms and publishers - Columbia Journalism Review
Gannett Is Using AI to Pump Brainrot Gambling Content Into Newspapers Across the Country – Futurism
Americans largely foresee AI having negative effects on news, journalists – Pew Research Center
A startup is using AI to summarize local city council meetings – Columbia Journalism Review
Have journalists skipped the ethics conversation when it comes to using AI? – The Conversation
Tomorrow’s Publisher, a site about the future of news, is “powered by” an AI startup - Harvard’s Nieman Lab
Why some journalists are embracing AI after all - IBM
Musk's xAI will pay Telegram $300 million to deploy its Grok chatbot on the messaging app – Reuters
AI learns how vision and sound are connected, without human intervention – MIT
Teaching journalism students generative AI: why I switched to an “AI diary” this semester – Online Journalism Blog
Patch’s big AI newsletter experiment - Harvard’s Nieman Lab
Study Guide Supremacy: Getting my news from ChatGPT - Columbia Journalism Review
Journalism is facing its crisis moment with AI. It might not be a bad thing. – Poynter
AI-Generated Content in Journalism: The Rise of Automated Reporting - TRENDS Research & Advisory
AI-Generated Fake Book List Seems Funny, but Reflects the Technology’s Danger to Journalism – PEN America
Journalists are using AI. They should be talking to their audience about it. – Poynter
Abstractive summarization (ABS) – A natural language processing technique that generates new sentences not found in the source material. In contrast, extractive summarization sticks to the original text, identifying the important sections and producing a subset of sentences taken verbatim from the original. Abstractive summarization is the better choice when the meaning of the text matters more than exactness, while extractive summarization is better when sticking to the original language is critical.
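To make the distinction concrete, here is a minimal, illustrative Python sketch (not drawn from any particular product; the function name is invented for this example). The extractive summary below simply copies the highest-scoring sentences verbatim, while the commented lines at the end show how an abstractive summary might instead be generated with the Hugging Face transformers library, if it is installed.

# Extractive summarization: pick sentences to copy verbatim from the source text.
from collections import Counter
import re

def extractive_summary(text: str, num_sentences: int = 2) -> str:
    """Return the sentences containing the most frequently used words."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    top = set(sorted(sentences, key=score, reverse=True)[:num_sentences])
    # Keep the chosen sentences in their original order.
    return " ".join(s for s in sentences if s in top)

# Abstractive summarization, by contrast, paraphrases rather than copies.
# With the Hugging Face transformers library (an assumption, not a requirement):
#   from transformers import pipeline
#   summarizer = pipeline("summarization")
#   print(summarizer(long_text, max_length=60)[0]["summary_text"])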
More AI definitions here
We make a living by what we get, we make a life by what we give. –Winston Churchill
A Culture War is Brewing Over Moral Concern for AI – Undark
In lawsuit over teen’s death, judge rejects arguments that AI chatbots have free speech rights – Associated Press
Take Nature’s AI research test: find out how your ethics compare – Nature
‘We can’t tell if we’re being persuaded by a person or a program’ – University of Melbourne
AI poses new moral questions. Pope Leo says the Catholic Church has answers. – Washington Post
NBC will use Jim Fagan’s AI-generated voice for NBA coverage –The Verge
Why misuse of generative AI is worse than plagiarism – Springer
Israel’s A.I. Experiments in Gaza War Raise Ethical Concerns – New York Times
Anthropic just analyzed 700,000 Claude conversations — and found its AI has a moral code of its own – Venture Beat
I asked ChatGPT to invent 6 philosophical thought experiments – and now my brain hurts – Tech Radar
Anthropic study reveals LLM reasoning isn’t always what it seems – TechTalks
As they push ahead with AI, health leaders must set rules on use – American Medical Association
AI: Uses, Ethics and Limitations – KUAF
What Happens When People Don’t Understand How AI Works – The Atlantic
Have journalists skipped the ethics conversation when it comes to using AI? – The Conversation
The moral dimension of AI for work and workers – Brookings
My students think it’s fine to cheat with AI. Maybe they’re onto something. – Vox
AI faces skepticism in end-of-life decisions, with people favoring human judgment – Medical Xpress
AI language model rivals expert ethicist in perceived moral expertise – Nature
Artificial Intelligence in courtrooms raises legal and ethical concerns – Associated Press
Bridging philosophy and AI to explore computing ethics - MIT
AI is Making Medical Decisions — But For Whom? – Harvard Magazine
The Solution to the AI Alignment Problem Is in the Mirror – Psychology Today
Our AI research findings carry important implications for the future of work. If employees consistently rely on AI for creative or cognitively challenging tasks, they risk losing the very aspects of work that drive engagement, growth, and satisfaction. Increased boredom, which our research showed following AI use, can also be a warning sign that these negative consequences might be on their way. The solution isn’t to abandon gen AI. Rather, it’s to redesign tasks and workflows to preserve humans’ intrinsic motivation while leveraging AI’s strengths. -Harvard Business Review
Narrow AI – The use of artificial intelligence for a very specific task or a limited range of tasks. For instance, general AI would mean an algorithm capable of playing all kinds of board games, while narrow AI limits the machine’s capabilities to a specific game like chess or Scrabble. Google Search, Alexa, and Siri answer questions using narrow AI algorithms. They can often outperform humans when confined to known tasks, but they often fail when presented with situations outside the problem space they were trained to work in. In effect, narrow AI can’t transfer knowledge from one field to another. The narrow AI techniques we have today basically fall into two categories: symbolic AI and machine learning.
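As a toy illustration (everything in this sketch is invented for the example, not taken from any real assistant), the few lines of Python below behave the way a narrow system does: they handle the small set of questions they were built for and simply fail on anything outside that problem space.

# A toy "narrow AI" assistant: competent only inside its fixed problem space.
KNOWN_ANSWERS = {
    "what is the capital of france": "Paris",
    "what is 2 + 2": "4",
    "who wrote hamlet": "William Shakespeare",
}

def narrow_assistant(question: str) -> str:
    key = question.strip().lower().rstrip("?")
    if key in KNOWN_ANSWERS:
        return KNOWN_ANSWERS[key]
    # No ability to transfer knowledge to tasks it was never built for.
    return "Sorry, that is outside what I was built to answer."

print(narrow_assistant("What is the capital of France?"))   # Paris
print(narrow_assistant("Can you teach me to play chess?"))  # outside its scope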
More AI definitions here.
What: We will explain generative artificial intelligence and discuss its impact. You will gain a basic understanding of its shortcomings, as well as the ways it can be used effectively. We will discuss some of the tools available to you through Duke. You will leave the session understanding how to create prompts that will get you the best results in your conversations with the AI.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Duke University
What: This workshop will identify different types of AI stories and explore what distinguishes the best media coverage of artificial intelligence. The virtual event is geared toward editors in North America, South America, Africa, and Europe: anyone in charge of directing coverage, commissioning stories, or packaging and producing them.
Who: Tom Simonite edits technology coverage for The Washington Post from San Francisco; Bina Venkataraman serves as editor-at-large for opinion strategy and innovation at The Washington Post.
When: 11 am, Eastern
Where: Zoom
Cost: Free
Sponsor: Pulitzer Center
What: In this engaging, hands-on session, you'll get a fun and interactive introduction to AI fundamentals—from understanding how large language models tokenize and process language, to exploring differences between traditional and deep machine learning. Through keyword exercises, mini-games, and thought-provoking prompts, you'll gain the confidence to identify real business challenges and discover where AI can truly make an impact in your organization.
Who: Gary Lamach, Vice President, Client Solutions, ELB Learning.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: ELB Learning
What: School superintendents have spent months planning curricula that may be altered by budget cuts, mass layoffs and mandates to eliminate programs promoting diversity, equity and inclusion for students. To help journalists report on these changes and how they’ll affect students and families, expert panelists will provide context on new federal education policies and tips on finding the local angle in this important national story.
Who: Jill Barshay, writer/editor at Hechinger Report; Noelle Ellerson Ng, associate executive director of advocacy & governance at the School Superintendents Association; Stephen Provasnik, former deputy commissioner of the National Center for Education Statistics at the U.S. Department of Education; Keri Rodrigues, co-founder and president of National Parents Union.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: The National Press Foundation
What: This session will explore what it means to use AI responsibly. We'll discuss how different groups (students, faculty, and professionals) are engaging with AI and unpack challenges facing us all. These include concerns around academic integrity, data privacy, bias, hallucination, and evolving expectations around citation and copyright. Participants will leave with practical strategies for establishing course or departmental policies, modeling responsible AI use, and supporting student AI literacy.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Duke University
What: Join us for an insightful webinar on leveraging AI to enhance fundraising outreach, development operations, and data management. Learn how AI & Automation can streamline donor engagement, personalize outreach, and optimize data processes to drive more effective fundraising efforts. We’ll explore practical applications, best practices, and real-world examples to help your nonprofit maximize efficiency and impact. Don’t miss this opportunity to revolutionize your fundraising strategy with AI!
Who: Terry Cangelosi is a Senior Director and Head of Operations at Orr Group; Abby Carlson is a Director and the Head of Data Analytics & Management at Orr Group; Dani Cluff is the Channel Marketing Coordinator at Bloomerang.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Bloomerang
What: A discussion on the potential benefits and risks of AI companions, what the early research says about this emerging technology, and how policymakers can support responsible innovation.
Who: Alex Ambrose, Policy Analyst (moderator); Taylor Barkley, Director of Public Policy, Abundance Institute; Melodi Dinçer, Policy Counsel, Tech Justice Law Project; Cathy Fang, PhD Student, MIT Media Lab; Clyde Vanel, Assemblyman (D-NY).
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Information Technology & Innovation Foundation
What: This session will provide participants with practical techniques for utilizing generative artificial intelligence to help with everyday work tasks. AI can help summarize meeting minutes, draft emails, brainstorm ideas, create images for slides, etc. We will provide how-to tips to get you started and showcase several useful tools.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Duke University
What: Artificial Intelligence is transforming the business landscape, offering vast potential and raising critical ethical challenges around bias, transparency, and privacy. This session will explore past missteps, key decision-making pressures, and best practices to ensure responsible, values-driven AI development.
Who: Sagnika Sen, Associate Professor at Penn State Great Valley & expert in AI and data-driven business strategy.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: The Small Business Development Center at Kutztown University of Pennsylvania
What: This webinar will help you become AI-literate in the concepts most likely to affect your career soonest, so you can start upskilling now and thrive in your career for years to come.
Who: Heather Mansfield, Founder of Nonprofit Tech for Good
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Nonprofit Tech for Good
What: A workshop on identifying, preserving and reporting on government data. In an era where federal data is at risk of disappearing or being altered, this training will equip you with the tools and knowledge to safeguard critical information.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Sunlight Research Center, Data Rescue Project, MuckRock, and the Data Curation Network
What: How AI can work as your research partner, helping to brainstorm ideas, mine data, uncover angles and streamline workflows.
Who: Harriet Meyer, an experienced financial journalist.
When: 8 am, Eastern
Where: Zoom
Cost: £5.00
Sponsor: Women in Journalism
What: How the Aftonbladet newsroom built an AI hub, trained Prompt Queens, and experimented with everything from editorial copilots to US election chatbots. Some ideas failed fast, others became surprise hits. This is the story of what worked, what didn’t, and what they learned along the way.
Who: Aftonbladet’s Deputy Publisher and Director of Editorial AI & Innovation Martin Schori.
When: 11 am, Eastern
Where: Zoom
Cost: Free
Sponsor: Online News Association
What: Concrete strategies to equip journalists with the tools they need to navigate leaks with integrity, rigor, and security.
Who: Robert Libetti, a journalist and filmmaker who was part of the 2025 Nieman class at Harvard.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Harvard’s Shorenstein Center & The Journalist's Resource.
What: In this highly experiential session, you’ll explore how to create learning experiences that go beyond transferring knowledge to build real skills that lead to behavior change. Through interactive examples—including eLearning scenarios, AI-enabled practice, structured feedback, and safe, realistic rehearsal with AI—you’ll experience first-hand what effective skills practice looks like. You’ll discover how AI can help scale and support these methods, enabling learners to build skills faster and more effectively. You’ll leave with practical strategies to help people not just know what to do, but actually do it.
Who: Danielle Wallace, Chief Learning Strategist, Beyond the Sky Custom Learning.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Training Magazine Network
What: The conversation will feature real-world examples, actionable insights, and expert tips to help you craft messages that resonate across regions, spark journalist interest, and track the performance of your campaigns.
Who: Kelvin Chan, Business Writer, The Associated Press (London); Zoë Clark, Sr. Partner, Head of Media and Influence, Tyto PR (London); Natassia Culp, Global Corporate Communications Lead, Wasabi Technologies (US); John Lerch, Sr. Director, Global Marketing, Tigo Energy (US).
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Business Wire
What: Sales in local for-profit and nonprofit news has shifted from selling ad space or pageviews to addressing psychological, strategic, and narrative-driven challenges. This session outlines how news organizations can adopt a sales approach rooted in sponsor alignment and value-based storytelling. Participants will learn how effective sales professionals operate as consultants who understand emotional decision-making, anticipate objections, and build trust across segmented and evolving markets. The session will focus on tailored communication strategies grounded in clarity, relevance, and long-term impact.
Who: Richard Brown, the Chief of Growth and Innovation at Wisconsin Watch.
When: 1 pm, Eastern
Where: Zoom
Cost: $35
Sponsor: Online Media Campus
What: A data-backed look at how brands can use authentic, inclusive visuals to bring their sustainability stories to life. Backed by new VisualGPS research and real-world examples, this session will explore how to translate complex concepts into powerful creative that resonates across channels.
Who: Tristen Norman, head of creative, Americas, Getty Images; Tawnya Crawford, VP and general manager of custom solutions, Getty Images.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Getty Images
What: This webinar will explore how libraries are expanding their roles to proactively foster integrity and support their institutions.
Who: Jason Openo, Dean, School of Health and Community Services, Medicine Hat College; Josh Seeland, Manager of Library Services at Assiniboine College; Jane Costello, Senior Instructional Design Specialist, the Centre for Innovation in Teaching and Learning, Memorial University of Newfoundland.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Springer Nature
According to Albert Ellis, the most common irrational ideas behind anger are the following:
1. Others must treat me considerately and kindly and in precisely the way I want them to treat me.
2. I must do well and win the approval of others or else I will rate as a rotten person.
3. The world and the people in it must arrange conditions under which I live, so that I get everything I want when I want it.
As their anger slows down, people should challenge irrational thoughts with statements such as:
What evidence exists for this? Why can't I stand this noise or this unfairness?
Gary Collins, Counseling and Anger
Impending death gives permission to feel, to let go, to recognize what’s important.
"A new study of a dozen A.I.-detection services by researchers at the University of Maryland found that they had erroneously flagged human-written text as A.I.-generated about 6.8 percent of the time, on average. 'At least from our analysis, current detectors are not ready to be used in practice in schools to detect A.I. plagiarism,' said Soheil Feizi, an author of the paper and an associate professor of computer science at Maryland." -New York Times
“What constitutes legitimate use of AI and what is out of bounds? Academic leaders don’t always agree whether hypothetical scenarios described appropriate uses of AI or not: For one example—in which a student used AI to generate a detailed outline for a paper and then used the outline to write the paper—the verdict (in a recent survey) was completely split.” -Inside Higher Ed
If your dreams don't scare you, they're not big enough.
Meta Signs Nuclear Power Deal to Fuel Its AI Ambitions – Wall Street Journal
Google is bringing ads to AI Mode – TechCrunch
How OpenAI, Google and AI makers are leaving the web behind - Axios
OpenAI Can Stop Pretending: The company is great at getting what it wants, whether or not it’s beholden to a nonprofit mission – The Atlantic
The New York Times has reached an AI licensing deal with Amazon – New York Times
AI race goes supersonic in milestone-packed week - Axios
OpenAI’s Ambitions Just Became Crystal Clear – The Atlantic
Google Unveils A.I. Chatbot, Signaling a New Era for Search - New York Times
Microsoft helped kick off the AI boom. It needs humans more than ever, its CEO says – Semafor
Journalist Karen Hao discusses her book 'Empire of AI' - NPR
The UAE and Saudi Arabia are pouring billions into U.S.-backed AI infrastructure – Wired
Google dominates AI patent applications - Axios
A Gemini-powered coding agent for designing advanced algorithms – DeepMind
Neural Networks (or artificial neural networks, ANNs) – Mathematical systems that can identify patterns in text, images and sounds. In this type of machine learning, computers learn a task by analyzing training examples. The approach is modeled loosely on the human brain, with its interwoven tangle of neurons that process data and find complex associations. While symbolic artificial intelligence was the dominant area of research for most of AI’s history, most recent developments in artificial intelligence have centered on neural networks. They were first proposed in 1944 by two University of Chicago researchers, Warren McCullough and Walter Pitts, who moved to MIT in 1952 as founding members of what’s sometimes referred to as the first cognitive science department. Neural nets remained a major research area in neuroscience and computer science until 1969. The technique enjoyed a resurgence in the 1980s, fell into disfavor in the first decade of the new century, and has returned stronger in the second decade, fueled largely by the increased processing power of graphics chips. Also, see “Transformers.”
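To give a feel for what "learning a task by analyzing training examples" means, here is a minimal sketch of a tiny neural network written in plain Python with NumPy. It is illustrative only; the architecture, learning rate, and the XOR task are arbitrary choices made for this example, not part of the definition above.

# A tiny two-layer neural network that learns XOR from four training examples.
import numpy as np

rng = np.random.default_rng(0)

# Training examples: inputs and the outputs the network should learn to produce.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases for a 2 -> 4 -> 1 network.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass: push the inputs through the layers of "neurons."
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: nudge every weight to shrink the prediction error.
    grad_out = (output - y) * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ grad_hid
    b1 -= 0.5 * grad_hid.sum(axis=0, keepdims=True)

print(np.round(output, 2))  # converges toward [[0], [1], [1], [0]]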
More AI definitions here.
A North Dakota plumber had signed up to run his first half-marathon. But on the morning of the run, Mike Kohler was sleepy. He wasn’t used to getting up so early. And because he was wearing headphones, he took off 15 minutes before he was supposed to, putting him with the runners competing in the full marathon. He started seeing signs that indicated he was on the wrong route, but he just assumed the two paths overlapped along the way.
Eventually, he realized his mistake but kept going. At the 13-mile mark he seriously thought about quitting. He had run as far as he had planned to run and had even beaten his time goal. He had nothing more to prove.
Instead, he finished the marathon.
“I’m just going to go for it, because why not?” Mike later told the Grand Forks Herald. “I’m already here, I’m already running, I’m already tired. Might as well try to finish it.”
He added, ”This just kind of proves you can do a lot more than what you think you can sometimes.”
AI is sparking a cognitive revolution. Is human creativity at risk? – Fast Company
Wired Envisions a Deepfake Future you’re not prepared for – Wired
AI Is Learning to Escape Human Control – Wall Street Journal
AI models hallucinate less than humans — just in “more surprising ways.” – Tech Crunch
Anthropic study reveals LLM reasoning isn’t always what it seems – BD Tech Talks
AI linked to explosion of low-quality biomedical research papers – Nature
The future of AI is in western Pennsylvania – Washington Post
LLMs are Making Me Dumber – Vincent Cheng
New cybersecurity risk: AI agents going rogue - Axios
AI therapy is a surveillance machine in a police state – The Verge
US government is using AI for unprecedented social media surveillance – New Scientist
Instagram's AI Chatbots Lie About Being Licensed Therapists – 404 Media
Why the AI Revolution Will Require Massive Energy Resources – AEI
Pedophiles Are Using AI To Turn Children’s Social Media Photos Into CSAM – Forbes
Becoming is a service of Goforth Solutions, LLC / Copyright ©2025 All Rights Reserved