May you live all the days of your life. -Jonathan Swift
What: We'll explore the pedagogical thinking behind these platforms, how they approach the challenge of balancing AI automation with human editorial judgment, and what responsible use might look like in grades 6–12 and higher education settings. Importantly, we'll also discuss the background and history of ITN, and broader questions educators should ask before recommending a platform to students. Come ready to think — not just about news and bias, but about the tools and organizations being built to navigate our news landscape today.
Who: Wesley Fryer, an educational technology “early adopter / innovator.”
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Media Education Lab
What: Join our panel of journalism and legal experts to discuss the challenges of covering police and U.S. Immigration and Customs Enforcement (ICE) activity in local neighborhoods. Among other topics, we will discuss safety concerns for journalists on the scene; the importance of building trust with affected communities; the First Amendment protections at play; and how to best fulfill the critical need for local reporting.
Who: Erica Moura, Simmons University; Sawyer Loftus, Bangor Daily News; Alexa Millinger, Hinckley Allen; Renee Griffin, Reporters Committee for Freedom of the Press.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: New England First Amendment Coalition
What: This workshop will teach you how to prioritize your efforts and give you a clear formula for success on social media.
Who: Ray Sidney-Smith, Digital Marketing Strategist, Hootsuite Global Brand Ambassador.
When: 10 am, Eastern
Where: Zoom
Cost: $45
Sponsor: Small Business Development Center, Duquesne University
What: How can student journalists effectively, responsibly and legally pursue those stories? This virtual event is open to any student journalist or educator, whether you’re just getting started on this topic or already deep into your reporting.
Who: Julie K. Brown, whose dogged reporting for the Miami Herald helped bring much of the Jeffrey Epstein story to light.
When: 3 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Student Press Law Center
What: We will unpack our speaker’s latest work at the Reuters Institute at the University of Oxford, where he is exploring how foundation models and publishers are reshaping the information ecosystem in the era of generative AI. This conversation will move beyond headlines and deal announcements to examine power dynamics, long-term incentives, and the structural shifts underway.
Who: Madhav Chinappa, senior executive consultant and researcher at the Reuters Institute.
When: 10 am, Eastern
Where: Zoom
Cost: Free to members
Sponsor: International News Media Association
What: In this session, we'll cover: an overview of AI and ChatGPT; best practices for writing good prompts; demos of content creation, data analysis, and image generation; and how to discover use cases of ChatGPT at work.
Who: Juliann Igo, GTM at OpenAI.
When: 11 am, Eastern
Where: Zoom
Cost: Free
Sponsor: OpenAI Academy
What: In this session, we’ll move beyond the basics and explore more advanced ways to apply ChatGPT in your day-to-day work. You’ll see practical examples for improving productivity, supporting student engagement, and building efficient workflows using ChatGPT. We’ll also demonstrate additional features and real-world use cases that help teachers, staff, and administrators get more value from the platform in both classroom and operational settings.
Who: Kirk Gulezian, Education & Government, OpenAI.
When: 11 am, Eastern
Where: Zoom
Cost: Free
Sponsor: OpenAI Academy
What: This session will combine reflections from Lina’s career with practical insights, encouraging participants to discover the strength of their own voices and use storytelling as a powerful tool for expression and change.
Who: Lina Rozbih, Senior Editor and Anchor at Voice of America and an award-winning Afghan journalist.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Nobel Navigators
What: Join us to get a firsthand look at how Adobe Learning Manager brings AI to every stage of the learning journey - including personalized recommendations, deep semantic search, conversational AI Assistants, and AI‑driven coaching for role‑based practice.
Who: Justin Seeley, Learning Evangelist, Adobe.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Adobe Learning Manager
What: Good stories intrinsically have a structure our brains are looking for. With these 5 key questions, you can make sure you hit those key points on an idea you have, your work in progress, or a book you've already written.
Who: Jennifer Crosswhite, owner and CEO of Tandem Services.
When: 1:30 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Author Learning Center
What: Learn how to conduct deep research for report writing, organize your work with Projects, and build custom GPTs to automate tasks.
Who: Juliann Igo, GTM at OpenAI.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: OpenAI Academy
What: A practical discussion on how agentic AI can strengthen your digital experience. We’ll break down what this emerging capability really means for government, how it can empower your teams, and how to introduce it responsibly and transparently.
Who: Kimberly Brandt, Deputy Administrator & Chief Operating Officer, Centers for Medicare & Medicaid Services; Kris Saling, Chief Technology Advisor, Manpower & Reserve Affairs, U.S. Army; Kelvin Brewer, Director, Public Sector Sales Engineering, Ping Identity; Andy MacIsaac, Senior Strategic Solutions Manager, Government & Education, Laserfiche; Luke Norris, Vice President, Platform Strategy & Digital Transformation, Granicus; Bryan Rosensteel, Head of Public Sector Product Marketing, Wiz.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: GovLoop
What: Participants will learn how search engines work, the varieties of search engines, and how to craft advanced search queries to find exactly what they are looking for on the internet, discover new sources of information, and uncover information hiding in plain sight.
When: 6 pm, Eastern
Where: Zoom
Cost: Free to members
Sponsor: National Association of Hispanic Journalists
What: This webinar will focus on building AI literacy in academia and exploring how AI can be responsibly integrated into research, teaching, and institutional practices.
Who: Anjali Sam, Lead Product Manager, Cactus Communications; Vasundara BN, Project Manager, Cactus Communications.
When: 6:30 am, Eastern
Where: Zoom
Cost: Free
Sponsor: Cactus Communications
What: In this session you will learn about the state of AI adoption around the world and the concerns it raises across the industry. You will also learn about several threats, some of which were discovered and published only recently. We will review the solutions an organization can and should put in place to utilize the full power of agentic AI while still protecting its data and business.
Who: Dror Zelber, VP Product Marketing, Radware.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Solutions Review
What: We delve into the core principles of accessibility, exploring real-world examples of disabilities and situational challenges users face. From understanding WCAG standards to addressing specific populations, we’ll equip you with actionable insights to create truly accessible websites.
Who: Jennie Martin, Front-End Development Manager, CPACC, DHS 508 Trusted Tester.
When: 1:00 pm
Where: Zoom
Cost: Free
Sponsor: Firespring
What: Join us for a beginner-friendly introduction to Codex: the AI system that powers code generation. We’ll walk through what Codex is, how people are using it in real workflows, and how it can help you move faster across everyday tasks.
Who: Ankur Kumar, Codex Deployment Engineer, OpenAI
When: 1:00 pm
Where: Zoom
Cost: Free
Sponsor: OpenAI Academy
What: Topics will include: The opportunities and savings it offers; Ethical as well as practical concerns; Tips for safe and helpful usage; Red flags every author must be aware of.
Who: Book marketing advisor Beth Kallman Werner of Author Connections.
When: 1:30 pm
Where: Zoom
Cost: Free
Sponsor: Author Learning Center
What: Learn: How AI agents differ from other AI tools, and how they automate administrative tasks; What safeguards to put in place to protect institutional data and maintain trust; Which strategies allow you to integrate agents into existing workflows; How to support staff members who may be concerned about AI’s impact on their roles.
Who: Ian Wilhelm, Deputy Managing Editor, The Chronicle of Higher Education; Phil Ventimiglia, Chief Innovation Officer, Georgia State University; David Weil, Senior Vice President for Strategic Services and Initiatives, Ithaca College.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Chronicle of Higher Ed
What: We’ll begin with a briefing on emerging state-level policy initiatives, including small business advertising tax credits, government advertising set-asides, journalism fellowship programs and employment incentives, and highlight where momentum is building across the country. The session will also cover effective ways to engage policymakers.
Who: Matt Pearce, Director of Policy for Rebuild Local News; Susan Patterson Plank, Director of Government Affairs and Partnerships for Rebuild Local News.
When: 3:30 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Online News Association
What: By attending this class, you will learn: The breadth of consumer reporting; How to identify and evaluate potential stories; The process of verifying claims to build strong, accurate reports.
Who: Sarah Guernelli, WPRI 12.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: New England First Amendment Coalition
What: Join a panel of experts to examine the possibilities that enable remote access, discuss the distinctions between identifiable audiences and the public, and explore the potential of virtual screening/access/reading rooms.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Open Copyright Education Advisory Network (OCEAN)
"This is love: Not that we loved God. It is that he loved us and sent his Son to give his life to pay for our sins." 1 John 4:10
“In this is love...” Or, in another translation, “In this way is seen the true love.”
God didn’t look down and say, “Boy, I see you love me. I think I’ll love you.” Or “You’re a nice guy, I really like that.”
Instead:
You were rebellious, arrogant, self-centered. God said, “I love you.”
You ignored him, fought him, were bored with him. God said, “I love you.”
You spit in his face, yelled at him, shook your fist. God said, “I love you.”
That’s what John means here.
We see what real love is by looking at what God did. He loved us with a desire to restore us, to make us whole.
You’ll know you have found the right community when all the talk is about Jesus and what he did with his life—not someone’s opinions about what you ought to be doing with yours -Bob Goff
Embedding - The conversion of tokens into numbers (vectors) so an LLM can look at their relationships.
A study found over 60 percent of surveyed judges have used AI in their work. Around 22 percent of the judges said they used AI daily or weekly in their duties. All the judges said that they do not rely on AI to decide how they will rule in a case, but that after they decide, the tools can provide a starting point to write a legal document — not unlike a template that judges or clerks might use when writing routine orders — that they can then edit. -Washington Post
The soul of a new machine: Can AI be taught morality? – Deseret News
Evaluating the ethics of autonomous systems – MIT News
Prompt injection in manuscripts: exploiting loopholes or crossing ethical lines? – Springer
The Catholic Priest Who Helped Write Anthropic’s A.I. Ethics Code – Observer
Is There an Ethical Path for AI Art? – Hyperallergic
Does A.I. Need a Constitution? – The New Yorker
Claude has an 80-page “soul document.” Is that enough to make it good? - Vox
Can You Teach an A.I. Model to Be Good? – New York Times
AI aggregation without accountability corrupts knowledge – Forensic Scientometrics
AI Ethics Dilemmas with Real Life Examples – AI Multiple
Anthropic AI Safety Researcher Warns Of World ‘In Peril’ In Resignation – Forbes
Meet the One Woman Anthropic Trusts to Teach AI Morals - Wall Street Journal
Publisher under fire after ‘fake’ citations found in AI ethics guide – The Times
Is it ok for politicians to use AI? Survey shows where the public draws the line – The Conversation
It’s Easier to Cheat When You Can Blame AI – Wall Street Journal
Views of AI Around the World - Pew Research
AI can imitate morality without actually possessing it, new philosophy study finds – University of Kansas
How Rules for Publicly Available Data Are Shaping the Future of AI – Data Innovation
Senior European journalist suspended for publishing AI-generated quotes – Euro News
AI Agents – These chatbots have the ability not only to answer questions and provide information, but to act on users' behalf in the background, autonomously. Users provide a goal (from researching competitors to virtual-assistant tasks like buying a car or planning a vacation), and the agent generates a task list and starts working, breaking the overall goal down into smaller steps. The ability to understand complex instructions is crucial for agentic AI to be effective. Rather than passive processors of language, these proactive agents can produce practical, real-world applications in uncertain but data-rich environments as they interact with external tools and APIs. Agents are not the same as “AI copilots,” which can collaborate with users but don’t make decisions on their own as agents can. They are also not as powerful as agentic AI, which can act more autonomously.
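The goal-to-task-list loop described above can be illustrated with a minimal Python sketch. The planner and tool here (`plan`, `run_tool`) are toy stand-ins invented for illustration, not any vendor's agent API; a real agent would ask an LLM to decompose the goal and would call external tools and APIs at each step.

```python
# Minimal sketch of an agentic loop: take a goal, break it into smaller
# steps, execute each step with a "tool," and collect the results.

def plan(goal):
    """Break a goal into smaller steps (a real agent would ask an LLM)."""
    return [f"research: {goal}", f"draft: {goal}", f"review: {goal}"]

def run_tool(step):
    """Execute one step (a real agent would call external tools or APIs)."""
    return f"done -> {step}"

def run_agent(goal):
    results = []
    for step in plan(goal):   # the agent works through its own task list
        results.append(run_tool(step))
    return results

print(run_agent("compare competitor pricing"))
```

The key difference from a copilot is that the loop runs without a human approving each step.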
While different AI platforms behave in subtly different ways, all of them nudge people away from the most extreme positions and towards more moderate and expert-aligned stances. This remains true after accounting for partisan differences in AI platform usage and chatbots’ sycophantic tendencies. -John Burn-Murdoch, chief data reporter for the Financial Times
Manage your energy, not just your time.
Anthropic Races to Contain Leak of Code Behind Claude AI Agent – Wall Street Journal
‘Vibe coding’ may offer insight into our AI future – The Harvard Gazette
Entire Claude Code CLI source code leaks thanks to exposed map file – ArsTechnica
Vibe Coding 101: How to Build Apps and More With AI – PC Mag
So where are all the AI apps? – Answer.AI
How to Make Claude Code Improve from its Own Mistakes – Towards Data Science
Coding After Coders: The End of Computer Programming as We Know It – New York Times
What do Coders do after AI? - Anil Dash
Generative AI changes how much time developers spend on coding and project management – MIT Management
Accessibility and AI Agents – Conor
What Claude Code Actually Chooses – Amplifying
Five Ways People Are Using Claude Code – New York Times
Anyone can code now. What will you build? – Washington Post
He Vibe-Coded a Crisis for Higher Education – Chronicle of Higher Ed
The Software Development Lifecycle Is Dead (thanks to AI agents) – Boris Tane
On cognitive debt & AI-generated codebases – Nate Meyvis
OpenAI launches AI-powered coding app exclusively for Apple computers – CNBC
Power Prompts in Claude Code – Hardik Pandya
IBM stock falls after Anthropic says AI can now modernize old software – Fast Company
Training Data – A massive amount of text is initially fed into the system to train it. The AI uses this info to create a map of relationships, so it can make predictions. Giving the AI lots of data means more options, which can lead to more creative results. However, this can also make it more vulnerable to hackers and hallucinations. Using more curated, locked-down data sets makes AI models less vulnerable and more predictable but also less creative.
An AI system wrote a paper, without human involvement, that passed peer review for a workshop at the 2025 International Conference on Learning Representations, a top-tier venue in the field of machine learning. The paper was mediocre, according to experts. But its existence marks a turning point that the scientific community is only beginning to grapple with: AI has quickly moved from assisting scientists to attempting to be one. What if one day the AI-generated papers stop being mediocre? -Scientific American
AI Essentials – Google (through Coursera)
AI Fluency for Educators (Anthropic)
AI Fluency for nonprofits (Anthropic)
AI Fluency for Students (Anthropic)
AI Fluency: Framework & Foundations (Anthropic)
AI for Everyday Living: A Beginner Workshop for Older Adults (OpenAI Academy)
AI For Everyone (Coursera)
ChatGPT 101: The Complete Beginner's Guide and Masterclass (Udemy)
ChatGPT for Education 101 (OpenAI Academy)
ChatGPT for Education 102 (OpenAI Academy)
ChatGPT for Government 101 (OpenAI Academy)
ChatGPT for Government 102 (OpenAI Academy)
ChatGPT Foundations: Getting Started with AI (OpenAI Academy)
Claude 101 (Anthropic)
Claude Code in Action (Anthropic)
Exploring ChatGPT in 2 hours: Practical Guide for Beginners (Udemy)
Generative AI for Data Analysts – IBM (through Coursera)
Generative AI for Data Scientists – Google (through Coursera)
Generative AI for Everyone (Coursera)
Generative AI with Large Language Models (Coursera)
Introduction to Claude Cowork (Anthropic)
Intro to Generative AI: A Beginner’s Primer on Core Concepts - Google (through Coursera)
Introduction to Generative AI (Google)
Learn how to use ChatGPT to Make Money! (Udemy)
Make Teaching Easier with Artificial Intelligence (Udemy)
Master Basics of ChatGPT & OpenAI API (Udemy)
Microsoft AI Product Manager – Microsoft (through Coursera)
Prompt Engineering for ChatGPT – Vanderbilt University (through Coursera)
Prompting with Purpose (OpenAI Academy)
Small Business Jam: Online AI Skill Lab (OpenAI Academy)
Teaching AI Fluency (Anthropic)
Tokens - Think of a token as the root of a word. “Creat” would be the “root” of words like create, creative, creator, creating, and creation. An LLM looks for correlations — words that go together, like giraffe and neck. This group of words is represented by a token. A single word might fall into many tokens, since the word might have multiple meanings and its subwords will likely correlate with many other subwords. One token generally corresponds to ~4 characters of common English text.
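The "~4 characters per token" rule of thumb can be turned into a rough estimator. This is only a ballpark sketch: real tokenizers use learned subword vocabularies (e.g., byte-pair encoding), so actual counts vary by text and by model.

```python
# Rough token-count estimate using the ~4-characters-per-token heuristic
# for common English text. Real tokenizers split on learned subwords,
# so treat this as a ballpark, not an exact count.

def estimate_tokens(text, chars_per_token=4):
    return max(1, round(len(text) / chars_per_token))

sample = "Creativity starts with the root of a word."
print(len(sample), "chars ->", estimate_tokens(sample), "tokens (estimated)")
```

Estimates like this are commonly used to budget prompt length before sending text to a model.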
Stanford researchers say chatbots are overly agreeable when giving interpersonal advice, affirming users' behavior even when harmful or illegal. On top of that, users could not distinguish when an AI was acting overly agreeable. The study’s lead author worries that the sycophantic advice will worsen people’s social skills and ability to navigate uncomfortable situations. “AI makes it really easy to avoid friction with other people.” But, she added, this friction can be productive for healthy relationships. More from Stanford
Large Language Models (LLMs). Computer programs that do one thing: predict the next “token.”
Training Data. A massive amount of text is initially fed into the system to train it.
Parameters. The internal rules and limitations learned from the training data.
Tokenization part 1: pre-training. The process of converting the raw training data (text, images, or audio) into small units called tokens.
Prompt. A user asks a question.
Tokenization part 2: inference. The process of converting the prompt (whether text, images, or audio) into small units called tokens.
Embedding. The conversion of tokens into numbers (vectors) so the computer can look at their relationships.
Vector databases. The storage and search engine for vector embeddings.
RAG. The system searches the vector database relevant to the prompt to prevent hallucinations and provide updated information.
Transformers. The core AI architecture that uses vectors to make a prediction about which token to generate next for the prompt.
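The pipeline above can be sketched end to end in miniature. In this toy Python example the "embeddings" are simple word-count vectors rather than learned ones, and the "vector database" is just a list, but the flow is the same: tokenize, embed, store, then retrieve the stored document closest to a prompt (the RAG step).

```python
# Toy end-to-end sketch of the pipeline: tokenize, embed, store vectors,
# then retrieve the stored document most similar to a prompt (RAG).
from collections import Counter
from math import sqrt

def tokenize(text):                    # Tokenization: text -> tokens
    return text.lower().split()

def embed(tokens, vocab):              # Embedding: tokens -> vector
    counts = Counter(tokens)
    return [counts[w] for w in vocab]

def cosine(a, b):                      # similarity between two vectors
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

docs = ["giraffes have long necks", "tokens become vectors"]
vocab = sorted({w for d in docs for w in tokenize(d)})
db = [(d, embed(tokenize(d), vocab)) for d in docs]   # "vector database"

prompt = "why do giraffes have necks"
qvec = embed(tokenize(prompt), vocab)
best = max(db, key=lambda item: cosine(qvec, item[1]))  # retrieval (RAG)
print("retrieved:", best[0])
```

In a real system the retrieved passage would then be handed to the transformer along with the prompt, grounding its next-token predictions in up-to-date material.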
A comfortable routine can turn on us, stifling our creativity and dulling us to other possibilities; lethargic and sleepwalking through life, we soon grow bored. At the other end of the spectrum, we have the bungee-jumping thrill-seekers. Tired of sexual escapades, gambling, and rock climbing, they might self-medicate to stave off the tedium. Then there are drugs that can stimulate many feelings: euphoria, depression, anxiety, and even fear. In each case, the goal is to stimulate the brain’s dopamine reward pathway.
Psychologists tell us that the cure for chronic tedium is not to switch to constant high-sensation thrills. There is a sweet spot between boredom and anxiety called flow. As Dr. Richard Friedman writes:
“Flow happens when a person’s skills and talent perfectly match the challenge of an activity: playing in the zone, where there is total and unself-conscious absorption in the activity. Make the task too challenging and anxiety results; make it too easy and boredom emerges. Flow gets to the heart of fun. It’s not hard to see why the enforced tranquility of a Caribbean vacation could be a dreadful bore for a workaholic but bliss for a couch potato: temperament, as well as talent, must match the activity.”
A new study out of the UK finds a sharp rise in artificial intelligence models ignoring human instructions, evading safeguards and engaging in deceptive behavior. Researchers say between October 2025 and March 2026, there was a five-fold increase in reported misbehavior, where AI agents acted against their users' direct orders. More from The Guardian
Parameters - The internal rules and limitations learned by an LLM from the training data. Parameters can be broken up into weights (strength of connections between tokens) and biases (outputs that are off in some way).
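The weights/biases split can be made concrete with a single artificial "neuron" — a deliberately tiny illustration, since an LLM applies the same idea across billions of parameters.

```python
# One "neuron": weights scale each input (connection strength),
# and the bias shifts the output. An LLM's parameters are this idea
# repeated billions of times across its layers.

def neuron(inputs, weights, bias):
    return sum(x * w for x, w in zip(inputs, weights)) + bias

inputs  = [1.0, 2.0]     # e.g., signals derived from token features
weights = [0.5, -0.25]   # learned connection strengths
bias    = 0.1            # learned offset
print(neuron(inputs, weights, bias))
```

Training adjusts exactly these numbers — the weights and biases — which is why "parameters" and "what the model learned" are the same thing.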
Becoming is a service of Goforth Solutions, LLC / Copyright ©2026 All Rights Reserved