My shame
Nothing gets between me and my shame. – Lesley Wheeler
The Turing Test - Proposed by computing pioneer Alan Turing in 1950, the Turing Test measures whether a computer program can fool a human into believing it is human. This has led to what some people are calling the “Reverse Turing Test,” where writers go out of their way to prove they are human.
If the consequence of AI cheating extends beyond the assignment — to course failure, a transcript notation, suspension, or expulsion — the institution has entered disciplinary territory. The student is owed due process, regardless of what the professor calls it or how the institution categorizes the proceeding. The administration must distinguish between a faculty member exercising professional judgment about a piece of work and an institution making a formal finding of misconduct. – The Chronicle of Higher Education
The following strategies can help you maintain a healthy balance between your expertise and AI assistance:
Generate rough drafts from notes, rather than from a blank page: It’s fine to generate drafts with AI, but do your thinking first, put together some structured notes, and treat AI-generated content as a first draft that requires critical review and substantial editing. This approach can help mitigate the risk of anchoring bias.
Rotate between AI-assisted and non-assisted writing: To develop and maintain your own writing skills, interweave AI tools into your writing workflow, rather than relying on them for chunks of text. This will also help you maintain your own voice.
Customize AI prompts: Learn to craft specific prompts that guide the AI to produce more relevant and useful outputs for your particular needs.
Ethical considerations: Be transparent about AI use, especially in academic writing, and follow any guidelines or policies set by your institution or publication venues.
Fact-check and verify: Always verify facts, citations and specific claims made by AI. These tools have a tendency to generate “hallucinations,” plausible-sounding but inaccurate chunks of information.
From The Transmitter
If I were to wish for anything, I should not wish for wealth and power, but for the passionate sense of the potential. And what wine is so sparkling, what so fragrant, what so intoxicating, as possibility! -Soren Kierkegaard, born May 5, 1813
OpenAI’s new image tool particularly excels at creating fake screenshots. Need to fabricate confirmation of a wire transfer from Chase? A Wells Fargo alert for unusual account activity? A receipt for an Uber ride? Done, done, and done. These images could supercharge all kinds of commonplace scams. - The Atlantic
Ethical Debt - Ethical debt is the result of not considering societal harms and unintended consequences of new technology during development. It means putting off ethical considerations until later, so it becomes a debt to pay later. This can easily happen in the fast-moving production of AI tools. The people who incur it are rarely the people who ultimately pay for it. Similarly, "technical debt" is a software development term referring to the cost of choosing fast solutions now and putting off fixing issues until a future time.
Not a grand performance but an act of love.
What: This session isn't about tools. It’s about fixing the decision layer that comes first. You’ll learn how to move from scattered, individual use to a more coordinated, human-led approach grounded in your mission, values, and your team’s real capacity.
Who: Ryann Miller, Founder of Spark & Signal.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: TechSoup
What: This presentation addresses the practical gap between organisational expectations and the technical implementation of explainable AI (XAI). Through two real-world use case scenarios, credit scoring and employee attrition prediction, we demonstrate how state-of-the-art XAI techniques, including SHAP (SHapley Additive Explanations) and LIME (Local Interpretable Model-agnostic Explanations), can be integrated into organisational processes to meet compliance and ethical demands.
Who: Marcus Becker, Assoc. Prof., Digital Transformation & Innovation Management, Management Center Innsbruck; Ana Moya, Lead, WAN-IFRA Data Science Expert Group.
When: 10 am, Eastern
Where: Zoom
Cost: Free
Sponsor: World Association of News Publishers
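SHAP and LIME, the two explainability techniques named in the session description, each require their own library, but the core idea they share — attributing a model’s prediction to individual input features by perturbing them — can be sketched without any dependencies. The credit-scoring model, feature names, and weights below are invented for illustration only:

```python
# Toy perturbation-based feature attribution, in the spirit of
# model-agnostic explainers like LIME and SHAP (which use more
# principled sampling and weighting). The "black box" here is a
# hypothetical linear credit-scoring function; all names and
# weights are illustrative, not from any real system.

def credit_score(features):
    """Stand-in black-box model: a weighted sum of inputs."""
    weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
    return sum(weights[k] * v for k, v in features.items())

def occlusion_attribution(model, features, baseline=0.0):
    """Attribute a prediction to each feature by replacing that
    feature with a baseline value and measuring how much the
    model's output changes."""
    full = model(features)
    attributions = {}
    for name in features:
        perturbed = dict(features)
        perturbed[name] = baseline
        attributions[name] = full - model(perturbed)
    return attributions

applicant = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}
attr = occlusion_attribution(credit_score, applicant)
# For a linear model, each attribution equals weight * value,
# e.g. debt contributes -0.8 * 2.0 = -1.6 to the score.
```

Real SHAP values average over many such perturbations with game-theoretic weighting, and LIME fits a local surrogate model, but both answer the same compliance-relevant question this sketch does: which inputs drove this particular decision.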
What: This webinar is focused on how journalists can build careers and thrive in business and industry-focused newsrooms. Our panelists will share insights on how their teams operate and what they look for in job applicants and potential colleagues.
Who: Paul F. Albergo, a journalism educator at American University; Maya Earls, deputy team lead for the Environment and Energy team at Bloomberg Law; Thai Phi Le, senior managing editor at Informa TechTarget.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: National Press Club
What: Vector embeddings transformed how we build search and retrieval systems, and if you’ve shipped production applications on top of them, you already know what they can do—and may also be starting to discover what they can’t. Vectors are powerful, but they represent a single point in space, while complex search problems involving multiple signals, multimodal data, or nuanced relevance ranking require something more expressive. Tensors extend what’s possible, enabling richer representations, more sophisticated scoring, and retrieval that can reason across dimensions that vector search simply wasn’t built to handle.
Who: Vespa.ai’s Bonnie Chase, Director of Product Marketing; Zohar Nissare-Houssen, Strategic Presales Lead Engineer.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: The New Stack
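The vector-versus-tensor distinction the session describes can be sketched in a few lines: a single pooled vector yields one dot-product score per document, while per-token (“multi-vector”) representations permit late-interaction scoring such as max-sim, which a tensor-capable engine can express. This is an illustrative toy with invented 2-D numbers, not Vespa’s actual API:

```python
# Contrast between single-vector scoring (one dot product per
# document) and multi-vector "late interaction" scoring (sum of
# per-query-token best matches), the kind of richer computation
# tensor representations enable. Numbers are invented.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def single_vector_score(query_vec, doc_vec):
    """One vector per query and document: a single dot product."""
    return dot(query_vec, doc_vec)

def maxsim_score(query_vecs, doc_vecs):
    """For each query-token vector, take its best match among the
    document's token vectors, then sum those maxima."""
    return sum(max(dot(q, d) for d in doc_vecs) for q in query_vecs)

query_tokens = [[1.0, 0.0], [0.0, 1.0]]  # two query-token vectors
doc_tokens = [[0.9, 0.1], [0.2, 0.8]]    # two document-token vectors

pooled_query = [0.5, 0.5]                # mean-pooled single vectors
pooled_doc = [0.55, 0.45]

print(single_vector_score(pooled_query, pooled_doc))  # 0.5
print(maxsim_score(query_tokens, doc_tokens))         # 0.9 + 0.8 = 1.7
```

The pooled score collapses all token-level signal into one number, while max-sim preserves which query token matched which document token — a small instance of the “reasoning across dimensions” the session description refers to.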
What: Join us for a practical OpenAI Academy session on how to identify, scope, build, test, and scale your first workspace agent for a team workflow. We’ll start with the basics: what agents are, how they work, and how they differ from other ways of using ChatGPT. Then we’ll walk through how to identify a strong workflow, write an “Agent Requirements Doc,” build a first version with tools, skills, and triggers, test and improve the agent, and roll it out safely with permissions, approvals, and feedback loops.
Who: Juliann Igo, GTM, OpenAI.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: OpenAI Academy

What: You will contribute anonymously to a series of prompts and come away with actionable insights for reassessing and repairing your relationship with work. This session was created specifically for journalists working within news organizations.
Who: Sam Ragland, API’s senior vice president.
When: 1 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: American Press Institute
What: In this session you will learn to: understand what AI agents are and see a live demonstration of building one; explore an example agent she has created, nicknamed NewsBot; and identify realistic ways agents could streamline your reporting and reduce repetitive tasks.
Who: Parvathi Subbiah, Tech Lead, AI Lab at The Economist.
When: 7:30 am
Where: Zoom
Cost: Member: £15; Nonmember: £25
Sponsor: Women in Journalism
What: In this tactical Mini Lab, you’ll see how school communicators can use Canva’s AI features to create social graphics, animated posts, and scroll-stopping videos that support enrollment, recruitment, and everyday district storytelling, while maintaining brand consistency and trust.
Who: Kate Crowder, Communications Coordinator, Germantown Municipal School District (Tenn.)
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: National School Public Relations Association
What: Learn how structured, conversational AI–guided reflection generates continuous, actionable insight into student learning and persistence, without adding new reporting burdens.
Who: Rebecca Thomas, Pathways ePortfolio Director and Associate Teaching Professor of Electrical & Computer Engineering, Bucknell University; Jeffery Yan, Cofounder & CEO, Digication.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: American Association of Colleges and Universities
What: This webinar features a panel of campus leaders discussing how institutions are using AI and other technology to strengthen student communications and keep humans in the loop. We’ll also dig into findings from The Chronicle’s national survey of administrators and faculty on AI for student communications, including perceptions of virtual assistants and why some are funding their own AI tools.
Who: Ian Wilhelm, Deputy Managing Editor, The Chronicle of Higher Education.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: The Chronicle of Higher Education
What: This is a beginner-friendly session on using Codex for real work and everyday tasks. We’ll explain what Codex can help you do in everyday work, and how to start with work you can review, build on, and trust.
Who: Diana Stegal, Customer Education, OpenAI; Charmaine Pek, AI Deployment, OpenAI; Kelsey Pedersen, Codex, OpenAI.
When: 2 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: OpenAI Academy
What: Explore the Science Reporting Navigator to incorporate scientific evidence, perspectives or context into your work, even when on deadline. In this hour-long workshop, participants will spend half an hour learning how to use the Science Reporting Navigator as a reporting tool and half an hour workshopping ideas and stories to turn into successful pitches.
When: 3 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: New England Newspaper & Press Association
What: Experts from the Smith School and an industry leader explore the pros and cons of this revolutionary change.
Who: Balaji Padmanabhan, Associate Dean for Strategic Initiatives and Director of the Center for Artificial Intelligence in Business, University of Maryland; Eaman Jahani, Assistant Professor, Robert H. Smith School of Business, University of Maryland; Robyn Tomlin, Executive Director, American Press Institute.
When: 12 pm, Eastern
Where: Zoom
Cost: Free
Sponsor: University of Maryland
No one can live without delight and that is why a man deprived of spiritual joy goes over to carnal pleasures -Thomas Aquinas
One does not discover new lands without consenting to lose sight of the shore for a very long time. -Andre Gide
What Happens if Trump Seizes AI Companies – The Atlantic
So, About That AI Bubble: Thanks to the rise of Claude Code and other AI agents, revenues are finally catching up to the hype. – The Atlantic
A.I. Spending Sets a Record, With No End in Sight – New York Times
AI Has Made Memory Chips One of the World’s Most Profitable Products – Wall Street Journal
Google Signs A.I. Deal with the Pentagon – New York Times
Google workers petition CEO to refuse classified AI work with Pentagon – Washington Post
OpenAI Sued by Seven Families Over Mass Shooting Suspect’s ChatGPT Use – Wall Street Journal
DeepMind’s David Silver just raised $1.1B to build an AI that learns without human data – TechCrunch
Elon Musk and OpenAI CEO Sam Altman head to court in high-stakes showdown over AI – Associated Press
The Podcast Where You Can Eavesdrop on the A.I. Elite – New York Times
The AI Splurge Is Costing Big Tech Its Workforce – Wall Street Journal
DeepSeek’s Sequel Set to Extend China’s Reach in Open-Source A.I. – New York Times
Florida's attorney general announces criminal investigation into OpenAI over shooting – NBC News
Beijing tightens its grip on AI firms that try to shed their Chinese ties – Washington Post
A.I. Start-Ups From Canada and Germany Merge to Take On Silicon Valley – New York Times
Anthropic’s Leaked Code Tests Copyright Challenges in A.I. Era – Wall Street Journal
Microsoft wants to build the infrastructure behind the AI internet – Axios
An Investor Dared Him to Quit School. Now He’s Building a $1.5 Billion AI Startup. – Wall Street Journal
Why AI companies want you to be afraid of them – BBC
The Billionaire Math Geek Who Turned AI Into a Money-Printing Machine – Wall Street Journal
The returns on standalone AI literacy without domain depth are heading to zero. What the economy will actually reward is deep domain expertise with AI embedded in industrial context. A financial analyst building AI-driven models needs to understand finance first. A biotech researcher using AI for drug discovery needs to understand biology first. The hard skills underneath the AI layer (mathematical reasoning, scientific literacy, domain knowledge) take years to develop and will hold their value. -Sofia Fenichell
GPT (Generative Pre-trained Transformer) – GPT refers to an LLM (large language model) that first goes through an unsupervised period (no data labeling by humans) followed by a supervised "fine-tuning" phase (some labeling). G is for Generative, indicating it will generate new, original text or content. P is for Pre-trained, referencing the training period when a model learns patterns and structures in the data it is given. T stands for Transformer, the core AI architecture that makes predictions about the output.
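The pretraining objective behind the "P" — learn from unlabeled text which token tends to follow which, then predict the next one — can be illustrated with a toy bigram counter. This is emphatically not a transformer (real GPTs condition on long contexts with attention, not adjacent-word counts); it is the same prediction task in miniature, on an invented one-sentence corpus:

```python
# Toy illustration of the unsupervised pretraining objective:
# given preceding text, predict the next token. A bigram
# frequency table stands in for the transformer architecture.
from collections import Counter, defaultdict

corpus = "the model reads text and the model predicts the next word".split()

# "Pretraining": count which word follows which in unlabeled text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often during training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "model" — it followed "the" twice here
```

Generating text is then just repeated next-token prediction: predict a word, append it, and predict again — the "Generative" part of the acronym.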
No amount of regret changes the past. No amount of anxiety changes the future. Any amount of gratitude changes the present. -Ann Voskamp
Red Teaming - Testing an AI by trying to force it to act in unintended or undesirable ways, thus uncovering potential harms. The term comes from a military practice of taking on the role of an attacker to devise strategies.
Art resists rules and quantification. No objective measurement exists to prove whether the poetry of Pablo Neruda is better than Gabriela Mistral’s. Novice writers learn conventions; great writers invent them. An LLM trained to imitate taste can go only so far. - Jasmine Sun writing in The Atlantic
The nice part about wearing a smile is that one size fits all.
Causal Inference Is Different in Business
16 Ways to make a Small Language Model think bigger
15 Best Certifications for Data Analysts
6 Things I Learned Building LLMs From Scratch That No Tutorial Teaches You
AI Agents Need Their Own Desk, and Git Worktrees Give Them One
Beyond retrieval and prompting: RAG needs context engineering
'Jagged Intelligence': The Illusion Of Reasoning In Modern LLMs
Building the foundation for running extra-large language models
Understanding and Fixing Model Drift
Advanced RAG Retrieval: Cross-Encoders & Reranking
How Does AI Learn to See in 3D and Understand Space?
Context Engineering for AI Agents: A Deep Dive
Data, not infrastructure, must drive your AI strategy
Benchmark Best 30 AI Governance Tools in 2026
Beyond Code Generation: AI for the Full Data Science Workflow
Predictive Analytics - This method of speculating about future events uses past data to make recommendations. Researchers create complex mathematical algorithms in an effort to discover patterns in the data. One doesn't know in advance what data is important. The statistical models created by predictive analytics are designed to discover which of the pieces of data will predict the desired outcome. While correlation is not causation, a cause-and-effect relationship is not needed in order to make predictions. This process is ideal for anticipating, for instance, what a user is most likely to be interested in based on past behavior and user characteristics. However, after gathering this data, data scientists often turn to causal AI to gauge its impact on user behavior. Some people use the terms “predictive analytics” and “predictive AI” interchangeably, while others treat “predictive analytics” as a broader term that includes non-AI methods such as statistical modeling and regression analysis. While predictive analytics focuses on forecasting future outcomes, generative AI focuses on creating new content. This makes predictive analytics useful for applications such as financial forecasting and health diagnosis, while generative AI is an application for content creation, art and design.
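The "anticipate what a user is most likely to be interested in based on past behavior" pattern described above can be sketched with a simple frequency model. This is illustrative only — the history and categories are invented, and real predictive analytics builds statistical or machine-learning models over many variables — but it shows why correlation alone is enough to predict:

```python
# Minimal sketch of the predictive-analytics pattern: score what
# a user is most likely to engage with next from historical
# behavior. Data is invented; no causal model is needed, because
# the goal is prediction, not explanation.
from collections import Counter

past_clicks = ["sports", "tech", "sports", "finance", "sports", "tech"]

def predict_interest(history, top_n=2):
    """Rank categories by historical frequency of engagement."""
    counts = Counter(history)
    return [category for category, _ in counts.most_common(top_n)]

print(predict_interest(past_clicks))  # ['sports', 'tech']
```

As the entry notes, a team might then hand this correlational prediction to causal methods to ask the harder question: would recommending more sports content actually change this user’s behavior, or does it merely reflect it?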
Becoming is a service of Goforth Solutions, LLC / Copyright ©2026 All Rights Reserved