AI Definitions: Embedding
Embedding - The conversion of tokens into numbers (vectors) so an LLM can look at their relationships.
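The idea can be sketched with toy numbers. The three-dimensional vectors below are invented for illustration (real embeddings have hundreds or thousands of dimensions), but they show how related words end up pointing in similar directions:

```python
import math

# Toy 3-dimensional embeddings. These values are made up for
# illustration; real models learn much larger vectors from data.
embeddings = {
    "giraffe": [0.9, 0.8, 0.1],
    "neck":    [0.8, 0.9, 0.2],
    "engine":  [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Measure how closely two vectors point in the same direction (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Related words end up with more similar vectors.
print(cosine_similarity(embeddings["giraffe"], embeddings["neck"]))
print(cosine_similarity(embeddings["giraffe"], embeddings["engine"]))
```

With these made-up vectors, giraffe and neck score close to 1.0 (very similar), while giraffe and engine score much lower, which is how an LLM "sees" that the first pair belongs together.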
A study found over 60 percent of surveyed judges have used AI in their work. Around 22 percent said they use AI daily or weekly in their duties. All of the judges said they do not rely on AI to decide how they will rule in a case, but that after they decide, the tools can provide a starting point for writing a legal document — not unlike a template that judges or clerks might use when writing routine orders — which they can then edit. -Washington Post
The soul of a new machine: Can AI be taught morality? – Deseret News
Evaluating the ethics of autonomous systems – MIT News
Prompt injection in manuscripts: exploiting loopholes or crossing ethical lines? – Springer
The Catholic Priest Who Helped Write Anthropic’s A.I. Ethics Code – Observer
Is There an Ethical Path for AI Art? – Hyperallergic
Does A.I. Need a Constitution? – The New Yorker
Claude has an 80-page “soul document.” Is that enough to make it good? - Vox
Can You Teach an A.I. Model to Be Good? – New York Times
AI aggregation without accountability corrupts knowledge – Forensic Scientometrics
AI Ethics Dilemmas with Real Life Examples – AI Multiple
Anthropic AI Safety Researcher Warns Of World ‘In Peril’ In Resignation – Forbes
Meet the One Woman Anthropic Trusts to Teach AI Morals - Wall Street Journal
Publisher under fire after ‘fake’ citations found in AI ethics guide – The Times
Is it ok for politicians to use AI? Survey shows where the public draws the line – The Conversation
It’s Easier to Cheat When You Can Blame AI – Wall Street Journal
Views of AI Around the World - Pew Research
AI can imitate morality without actually possessing it, new philosophy study finds – University of Kansas
How Rules for Publicly Available Data Are Shaping the Future of AI – Data Innovation
Senior European journalist suspended for publishing AI-generated quotes – Euro News
AI Agents – These chatbots can not only answer questions and provide information but also act on users' behalf in the background, autonomously. Users provide a goal (from researching competitors to virtual-assistant tasks like buying a car or planning a vacation), and the agent generates a task list and starts working, breaking the overall goal into smaller steps. The ability to understand complex instructions is crucial for agentic AI to be effective. Rather than passive processors of language, these proactive agents can handle practical, real-world applications in uncertain but data-rich environments as they interact with external tools and APIs. Agents are not the same as “AI copilots,” which collaborate with users but don’t make decisions on their own as agents can. They are also not as powerful as Agentic AI, which can act more autonomously.
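That plan-then-execute loop can be sketched in a few lines. Everything here is a hypothetical stand-in: a real agent would call an LLM to generate the task list and external tools or APIs to carry out each step.

```python
# A minimal sketch of an agentic loop. plan() and run_step() are
# invented placeholders standing in for an LLM call and tool calls.

def plan(goal):
    """Break the overall goal into smaller steps (stand-in for an LLM planner)."""
    return [
        f"research options for: {goal}",
        f"compare top candidates for: {goal}",
        f"draft a recommendation for: {goal}",
    ]

def run_step(step):
    """Execute one step (stand-in for a tool or API call)."""
    return f"done: {step}"

def run_agent(goal):
    """Work through the task list one step at a time."""
    return [run_step(step) for step in plan(goal)]

for line in run_agent("plan a vacation"):
    print(line)
```

The point of the sketch is the shape, not the content: the user supplies only the goal, and the agent itself decides on and works through the intermediate steps.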
While different AI platforms behave in subtly different ways, all of them nudge people away from the most extreme positions and towards more moderate and expert-aligned stances. This remains true after accounting for partisan differences in AI platform usage and chatbots’ sycophantic tendencies. -John Burn-Murdoch, chief data reporter for the Financial Times
Manage your energy, not just your time.
Anthropic Races to Contain Leak of Code Behind Claude AI Agent – Wall Street Journal
‘Vibe coding’ may offer insight into our AI future – The Harvard Gazette
Entire Claude Code CLI source code leaks thanks to exposed map file – ArsTechnica
Vibe Coding 101: How to Build Apps and More With AI – PC Mag
So where are all the AI apps? – Answer.AI
How to Make Claude Code Improve from its Own Mistakes – Towards Data Science
Coding After Coders: The End of Computer Programming as We Know It – New York Times
What do Coders do after AI? - Anil Dash
Generative AI changes how much time developers spend on coding and project management – MIT Management
Accessibility and AI Agents – Conor
What Claude Code Actually Chooses – Amplifying
Five Ways People Are Using Claude Code – New York Times
Anyone can code now. What will you build? – Washington Post
He Vibe-Coded a Crisis for Higher Education – Chronicle of Higher Ed
The Software Development Lifecycle Is Dead (thanks to AI agents) – Boris Tane
On cognitive debt & AI-generated codebases – Nate Meyvis
OpenAI launches AI-powered coding app exclusively for Apple computers – CNBC
Power Prompts in Claude Code – Hardik Pandya
IBM stock falls after Anthropic says AI can now modernize old software – Fast Company
Training Data – A massive amount of text is initially fed into the system to train it. The AI uses this info to create a map of relationships, so it can make predictions. Giving the AI lots of data means more options, which can lead to more creative results. However, this can also make it more vulnerable to hackers and hallucinations. Using more curated, locked-down data sets makes AI models less vulnerable and more predictable but also less creative.
An AI system wrote a paper, without human involvement, that passed peer review for a workshop at the 2025 International Conference on Learning Representations, a top-tier venue in the field of machine learning. The paper was mediocre, according to experts. But its existence marks a turning point that the scientific community is only beginning to grapple with: AI has quickly moved from assisting scientists to attempting to be one. What if one day the AI-generated papers stop being mediocre? -Scientific American
AI Essentials – Google (through Coursera)
AI Fluency for Educators (Anthropic)
AI Fluency for nonprofits (Anthropic)
AI Fluency for Students (Anthropic)
AI Fluency: Framework & Foundations (Anthropic)
AI for Everyday Living: A Beginner Workshop for Older Adults (OpenAI Academy)
AI For Everyone (Coursera)
ChatGPT 101: The Complete Beginner's Guide and Masterclass (Udemy)
ChatGPT for Education 101 (OpenAI Academy)
ChatGPT for Education 102 (OpenAI Academy)
ChatGPT for Government 101 (OpenAI Academy)
ChatGPT for Government 102 (OpenAI Academy)
ChatGPT Foundations: Getting Started with AI (OpenAI Academy)
Claude 101 (Anthropic)
Claude Code in Action (Anthropic)
Exploring ChatGPT in 2 hours: Practical Guide for Beginners (Udemy)
Generative AI for Data Analysts – IBM (through Coursera)
Generative AI for Data Scientists – Google (through Coursera)
Generative AI for Everyone (Coursera)
Generative AI with Large Language Models (Coursera)
Introduction to Claude Cowork (Anthropic)
Intro to Generative AI: A Beginner’s Primer on Core Concepts - Google (through Coursera)
Introduction to Generative AI (Google)
Learn how to use ChatGPT to Make Money! (Udemy)
Make Teaching Easier with Artificial Intelligence (Udemy)
Master Basics of ChatGPT & OpenAI API (Udemy)
Microsoft AI Product Manager – Microsoft (through Coursera)
Prompt Engineering for ChatGPT – Vanderbilt University (through Coursera)
Prompting with Purpose (OpenAI Academy)
Small Business Jam: Online AI Skill Lab (OpenAI Academy)
Teaching AI Fluency (Anthropic)
Tokens - Think of a token as the root of a word. “Creat” would be the “root” of words like create, creative, creator, creating, and creation. An LLM looks for correlations — words that go together, like giraffe and neck. This group of words is represented by a token. A single word might be split across many tokens, since the word might have multiple meanings and its subwords will likely correlate with many other subwords. One token generally corresponds to ~4 characters of common English text. Examples
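The ~4-characters rule of thumb is easy to turn into a rough token estimator. This is only a heuristic: real tokenizers split text into learned subwords, so actual counts will vary.

```python
# Rough rule of thumb from the definition above: one token is about
# 4 characters of common English text. Real tokenizers (e.g. BPE)
# use learned subword vocabularies instead of fixed-size chunks.

def estimate_tokens(text):
    """Estimate token count using the ~4 characters-per-token heuristic."""
    return max(1, round(len(text) / 4))

sentence = "The giraffe stretched its long neck toward the leaves."
print(len(sentence), "characters ->", estimate_tokens(sentence), "tokens (estimated)")
```

Estimates like this are useful for quick back-of-the-envelope checks against a model's context window, but an exact count requires the model's own tokenizer.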
Stanford researchers say chatbots are overly agreeable when giving interpersonal advice, affirming users' behavior even when harmful or illegal. On top of that, users could not distinguish when an AI was acting overly agreeable. The study’s lead author worries that the sycophantic advice will worsen people’s social skills and ability to navigate uncomfortable situations. “AI makes it really easy to avoid friction with other people.” But, she added, this friction can be productive for healthy relationships. More from Stanford
Large Language Models (LLMs). Computer programs that do one thing: predict the next “token.”
Training Data. A massive amount of text is initially fed into the system to train it.
Parameters. The internal rules and limitations learned from the training data.
Tokenization part 1: pre-training. The process of converting the raw training data (text, images, or audio) into small units called tokens.
Prompt. A user asks a question.
Tokenization part 2: inference. The process of converting the prompt (whether text, images, or audio) into small units called tokens.
Embedding. The conversion of tokens into numbers (vectors) so the computer can look at their relationships.
Vector databases. The storage and search engine for vector embeddings.
RAG. The system searches the vector database for content relevant to the prompt, to reduce hallucinations and provide updated information.
Transformers. The core AI architecture that uses vectors to make a prediction about which token to generate next for the prompt.
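The middle steps of that pipeline — embedding, vector storage, and retrieval — can be walked through with a toy example. The letter-frequency "embedding" below is invented for illustration; real systems use learned embeddings and a dedicated vector database.

```python
import math

# Toy walk-through of the retrieval steps: embed, store, search.
# embed() is a made-up bag-of-letters vector, not a learned embedding.

def embed(text):
    """Map text to a 26-dim vector of letter frequencies (toy embedding)."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    """Similarity between two vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# "Vector database": documents stored alongside their embeddings.
documents = [
    "Giraffes have long necks.",
    "Transformers predict the next token.",
    "Vector databases store embeddings.",
]
index = [(doc, embed(doc)) for doc in documents]

# RAG step: embed the prompt, then retrieve the closest document.
prompt = "How do transformers generate the next token?"
best = max(index, key=lambda item: cosine(embed(prompt), item[1]))
print("Retrieved context:", best[0])
```

The retrieved document would then be handed to the transformer along with the prompt, so the model answers from stored material rather than from memory alone.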
A comfortable routine can turn on us, stifling our creativity and dulling us to other possibilities. Lethargic, we sleepwalk through life, and boredom soon arrives. At the other end of the spectrum, we have the bungee-jumping thrill-seekers. Tired of sexual escapades, gambling, and rock climbing, they might self-medicate to stave off the tedium. Then there are drugs that can stimulate many feelings: euphoria, depression, anxiety and even fear. In each case, the goal is to stimulate the brain’s dopamine reward pathway.
Psychologists tell us that the cure for chronic tedium is not to switch to constant high-sensation thrills. There is a sweet spot between boredom and anxiety called flow. As Dr. Richard Friedman writes:
“Flow happens when a person’s skills and talent perfectly match the challenge of an activity: playing in the zone, where there is total and unself-conscious absorption in the activity. Make the task too challenging and anxiety results; make it too easy and boredom emerges. Flow gets to the heart of fun. It’s not hard to see why the enforced tranquility of a Caribbean vacation could be a dreadful bore for a workaholic but bliss for a couch potato: temperament, as well as talent, must match the activity.”
A new study out of the UK finds a sharp rise in artificial intelligence models ignoring human instructions, evading safeguards and engaging in deceptive behavior. Researchers say that between October 2025 and March 2026, there was a five-fold increase in reported misbehavior, in which AI agents acted against their users' direct orders. More from The Guardian
Parameters - The internal rules and limitations learned by an LLM from the training data. Parameters can be broken up into weights (the strength of connections between tokens) and biases (learned offsets that shift outputs up or down).
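A single artificial "neuron" shows what weights and biases do in practice. The numbers below are invented for illustration; training is the process of learning values like these from data.

```python
# Toy illustration of parameters: one "neuron" whose behavior is
# fully determined by its weights and bias. Real models have
# billions of such parameters.

weights = [0.6, -0.4, 0.2]   # strength of each input connection
bias = 0.1                   # a learned offset applied to the output

def neuron(inputs):
    """Weighted sum of the inputs plus the bias: sum(w * x) + b."""
    return sum(w * x for w, x in zip(weights, inputs)) + bias

print(neuron([1.0, 0.5, 2.0]))
```

Change any weight or the bias and the same input produces a different output, which is why a model's behavior is said to be stored in its parameters.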
Love anything that lives—a person, a pet, a plant—and it will die. Trust anybody and you may be hurt; depend on anyone and that one may let you down. The price of cathexis (letting something or someone become important to us) is pain. If someone is determined not to risk pain, then such a person must do without many things: having children, getting married, the ecstasy of sex, the hope of ambition, friendship - all that makes life alive, meaningful and significant.
Move out or grow in any dimension and pain as well as joy will be your reward. A full life will be full of pain. But the only alternative is not to live fully or not to live at all. The attempt to avoid legitimate suffering lies at the root of all emotional illness.
M Scott Peck, The Road Less Traveled
The pain you feel is a reminder that you are alive, living life. You’re not on the sidelines; you are in the game. Let it be a stepping stone instead of a stumbling block.
How to Make Claude Code Improve from its Own Mistakes
Bayesian Thinking for People Who Hated Statistics
If you’ve built an LLM that predicts well but fails when turned into a decision
I Deleted My Vector Database and My RAG System Got Better
AI Definitions: Model Context Protocol (MCP)
Why Prompt Engineering Hits a Wall and where we go next
How to Build a Production-Ready Claude Code Skill (a reusable prompt with structure)
U.S. NGA moves to replace manual geospatial analysis with AI
Strait of Hormuz crisis drives demand for commercial geospatial intelligence
Generative AI changes how much time developers spend on coding and project management
AI delegation: The development of protocols in the emerging agentic web.
Building Interactive Geospatial Dashboards Using Folium
The state of AI in today’s geospatial industry
The human foundations of AI: rethinking skills, structure and strategy
Automate your exploratory data analysis workflow with these 5 ready-to-use Python scripts
How to Become an AI Engineer in 2026
The Gap Between Junior and Senior Data Scientists Isn’t Code
What: Best practice when reporting on domestic abuse and sexual violence.
When: 9 am, Eastern
Where: Zoom
Cost: Free
Sponsor: Welsh Women’s Aid
What: Quickly and efficiently make professional infographics. To stay current, designers want to adopt “Agile methodologies.” This workshop shows, step by step, how to use Agile to turn words into professional, compelling infographics quickly. Learn the proven techniques and tools the pros use to do more with less.
Who: Mike Parkinson Author, Owner, Billion Dollar Graphics.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Training Magazine
What: Our expert panel will break down how B2B media companies are responding to media landscape changes and share actionable strategies for journalists to thrive. We’ll discuss: Which tasks are easily automated to save you time; Where to lean on human strengths like deep connections and context; The impact of AI on source verification and research; The B2B media roadmap for an AI-integrated future.
Who: Brendan Howard, Freelance Podcast Host; Maria Korolov, Technology Journalist & Author; Alexis Gajewski, Associate Director of Newsroom Operations, Endeavor B2B; Priyanka Rao, Founder & CEO, AI Champions.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: American Society of Business Publication Editors
What: Ten years ago, the Panama Papers exposed the hidden offshore financial system used by politicians, billionaires and criminals around the world. Its impact continues to shape the fight against financial secrecy today. A conversation about how the investigation unfolded, the reforms it triggered and why the struggle for transparency is far from over.
Who: ICIJ Executive Director Gerard Ryle; international tax justice expert Tove Maria Ryding.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: International Consortium of Investigative Journalists
What: We will evaluate the angles of attack against journalists and news organizations and discuss how to exercise First Amendment rights in the face of hostility and near constant threats.
Who: Jeffrey Hermes, Deputy Director, Media Law Resource Center; George Freeman, Executive Director, Media Law Resource Center.
When: 6 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: National Association of Hispanic Journalists
What: How the publisher's advertiser relationships were tested in real time as they covered one of the most consequential local news stories in the country: the federal immigration enforcement operations that have drawn national attention.
Who: Brian Kennett, VP and head of digital advertising and agency services at the Minnesota Star Tribune; Dave Karabag, regional VP of advertising sales at the Orlando Sentinel.
When: 10 am, Eastern
Where: Zoom
Cost: Free to members
Sponsor: International News Media Association
What: This session will provide a practical walkthrough of the platform and show how teachers, school staff, and district administrators can begin using AI to support their day-to-day work.
Who: Kirk Gulezian, Education & Government, OpenAI.
When: 11 am, Eastern
Where: Zoom
Cost: Free
Sponsor: OpenAI Academy
What: A hands-on session on how teams use Workspace Analytics in ChatGPT Enterprise to run stronger rollouts—finding where adoption is gaining traction, where it’s stalling, and what to do next.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: OpenAI Academy
What: The average growth rate for new social media followers ranges from 0.64% to 3% per month, depending on the platform. In other words, the era of organic growth on social media is over. To grow your nonprofit’s following on social media, you need to make a concerted effort to let your supporters and donors know how to find your nonprofit on social media. This free 20-minute webinar will present eight ways to grow your nonprofit’s following on social media.
Who: Heather Mansfield, Founder of Nonprofit Tech for Good.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Nonprofit Tech
What: Built by Military Veterans in Journalism (MVJ) in collaboration with the Disabled Journalists Association (DJA), Fix the Frame: A Newsroom Guide to Disability Narratives is a practical resource designed to help journalists produce more accurate, respectful, and inclusive reporting on disability.
Who: Zack Baddorf (MVJ); Cara Reedy (DJA); Rebecca Cokley, Ford Foundation; Sam Kille; Beth Haller; Russell Midori (MVJ); Devon Lancia (MVJ).
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Military Veterans in Journalism
What: We’ll explore how leaders can automate coaching prompts, personalize development pathways, and measure impact without losing the human touch. The combination of AI and coaching creates an ecosystem of intelligent reinforcement—keeping learners engaged long after training ends.
Who: Tim Hagen is the President of Progress Coaching.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: OpenSesame
What: We will explore how AI is accelerating both cyberattacks and defenses, why SMBs and mid-market organizations are increasingly targeted, and how the gap between traditional security tools and AI-era threats continues to widen.
Who: Justin Vredeveld, Business Development Manager; Jared Olson, Security Team Lead; Blake Mielke, Incident Response Lead.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: Ontech Systems
What: How using AI to analyze large collections of data can shed light on the efficacy of professional learning.
Who: Lisa Schmucki, Founder and CEO of edWeb.net; Thor Prichard, President and CEO of Clarity Innovations.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: EdWeb.net
May your trails be crooked, winding, lonesome, dangerous, leading to the most amazing view. -Edward Abbey
Becoming is a service of Goforth Solutions, LLC / Copyright ©2026 All Rights Reserved