AI definitions: Ethical Debt

Ethical Debt - Ethical debt is the cost of failing to consider the societal harms and unintended consequences of a new technology during its development. Ethical considerations are put off until later, becoming a debt that must eventually be paid — and one that accrues easily in the fast-moving production of AI tools. The people who incur it are rarely the people who ultimately pay for it. The term echoes "technical debt," a software development term for the cost of choosing fast solutions now and putting off fixing the resulting issues until later.

More AI definitions

13 Webinars this week about AI, Journalism & Media

Mon, May 4 - Strategic AI for Nonprofit Leaders

What: This session isn't about tools. It’s about fixing the decision layer that comes first. You’ll learn how to move from scattered, individual use to a more coordinated, human-led approach grounded in your mission, values, and your team’s real capacity.

Who: Ryann Miller, Founder of Spark & Signal.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: TechSoup

More Info

 

Tue, May 5 - Open Sesame: Opening the Algorithmic Black Box with Practical Explainability Use Cases

What: This presentation addresses the practical gap between organisational expectations and the technical implementation of explainable AI (XAI). Through two real-world use case scenarios, credit scoring and employee attrition prediction, we demonstrate how state-of-the-art XAI techniques, including SHAP (SHapley Additive Explanations) and LIME (Local Interpretable Model-agnostic Explanations), can be integrated into organisational processes to meet compliance and ethical demands.

Who: Marcus Becker, Assoc. Prof., Digital Transformation & Innovation Management, Management Center Innsbruck; Ana Moya, Lead, WAN-IFRA Data Science Expert Group.

When: 10 am, Eastern

Where: Zoom

Cost: Free

Sponsor: World Association of News Publishers

More Info
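The SHAP technique mentioned above assigns each input feature a share of a model's prediction using Shapley values from game theory. As a rough illustration (my own sketch, not material from the webinar), here is an exact Shapley-value computation for a toy credit-scoring function; the model, feature values, and baseline are all invented:

```python
# Exact Shapley values -- the idea behind SHAP -- for a toy credit score.
# Missing features are imputed with a baseline value, and each feature's
# payout is its weighted marginal contribution over all coalitions.
from itertools import combinations
from math import factorial

def score(x):
    # Hypothetical credit-scoring model: weighted sum of income, debt, history.
    w = [0.5, -0.3, 0.2]
    return sum(wi * xi for wi, xi in zip(w, x))

def shapley_values(f, x, baseline):
    n = len(x)
    def eval_coalition(S):
        # Features in S take their real value; the rest fall back to baseline.
        return f([x[i] if i in S else baseline[i] for i in range(n)])
    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (eval_coalition(set(S) | {i}) - eval_coalition(set(S)))
        phi.append(total)
    return phi

applicant = [80.0, 20.0, 10.0]   # invented feature values
baseline = [50.0, 30.0, 5.0]     # invented "average applicant"
print(shapley_values(score, applicant, baseline))  # approximately [15.0, 3.0, 1.0]
```

Because the toy model is linear, each feature's Shapley value reduces to its weight times its deviation from the baseline; production libraries such as SHAP approximate this computation efficiently for complex models.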

 

Tue, May 5 - Finding opportunities within business journalism and B2B publications

What: This webinar is focused on how journalists can build careers and thrive in business and industry-focused newsrooms. Our panelists will share insights on how their teams operate and what they look for in job applicants and potential colleagues. 

Who: Paul F. Albergo, a journalism educator at American University; Maya Earls, deputy team lead for the Environment and Energy team at Bloomberg Law; Thai Phi Le, senior managing editor at Informa TechTarget.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: National Press Club

More Info

 

Tue, May 5 - From Vectors to Tensors: Expanding the Possibilities of AI Search

What: Vector embeddings transformed how we build search and retrieval systems, and if you’ve shipped production applications on top of them, you already know what they can do—and may also be starting to discover what they can’t. Vectors are powerful, but they represent a single point in space, while complex search problems involving multiple signals, multimodal data, or nuanced relevance ranking require something more expressive. Tensors extend what’s possible, enabling richer representations, more sophisticated scoring, and retrieval that can reason across dimensions that vector search simply wasn’t built to handle.

Who: Vespa.ai’s Bonnie Chase, Director of Product Marketing; Zohar Nissare-Houssen, Strategic Presales Lead Engineer.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: The New Stack

More Info
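To make the vector-versus-tensor distinction above concrete, here is a rough sketch (my own, not Vespa's API) of "late interaction" scoring: a document represented as multiple token embeddings — a 2-D tensor — is scored by letting each query token find its best-matching document token, something a single-vector dot product cannot express:

```python
# Single-vector search collapses a document to one point; tensor-style
# late interaction keeps one embedding per token and scores across them.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def single_vector_score(query_vec, doc_vec):
    # Classic vector search: one similarity per document.
    return dot(query_vec, doc_vec)

def max_sim_score(query_vecs, doc_vecs):
    # Late interaction: each query token matches its best document token,
    # then the per-token maxima are summed -- a score over a tensor.
    return sum(max(dot(q, d) for d in doc_vecs) for q in query_vecs)

query_tokens = [[1.0, 0.0], [0.0, 1.0]]            # invented query-token embeddings
doc_tokens = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]  # invented doc-token embeddings
print(max_sim_score(query_tokens, doc_tokens))     # best matches: 0.9 + 0.8
```

All embeddings here are invented two-dimensional toys; real systems use hundreds of dimensions and optimized tensor math, but the scoring idea is the same.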

 

Tue, May 5 - Skill Lab: Build Your First Workspace Agent

What: Join us for a practical OpenAI Academy session on how to identify, scope, build, test, and scale your first workspace agent for a team workflow. We’ll start with the basics: what agents are, how they work, and how they differ from other ways of using ChatGPT. Then we’ll walk through how to identify a strong workflow, write an “Agent Requirements Doc,” build a first version with tools, skills, and triggers, test and improve the agent, and roll it out safely with permissions, approvals, and feedback loops.

Who: Juliann Igo, GTM, OpenAI.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: OpenAI Academy

More Info

 

Tue, May 5 - Beyond stress: What journalists should know about burnout

What: You will respond anonymously to a series of prompts and come away with actionable insights for reassessing and repairing your relationship with work. The session was created specifically for those working within news organizations.

Who: Sam Ragland, API’s senior vice president.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: American Press Institute

More Info

 

Wed, May 6 - Advanced AI Course: Building AI agents to make you a better journalist

What: In this session you will learn to: understand what AI agents are and see a live demonstration of building one; explore an example agent she has created, nicknamed NewsBot; and identify realistic ways agents could streamline your reporting and reduce repetitive tasks.

Who: Parvathi Subbiah, Tech Lead, AI Lab at The Economist.

When: 7:30 am

Where: Zoom

Cost: Member: £15; Nonmember: £25

Sponsor: Women in Journalism

More Info

 

Wed, May 6 - Canva + AI

What: In this tactical Mini Lab, you’ll see how school communicators can use Canva’s AI features to create social graphics, animated posts, and scroll-stopping videos that support enrollment, recruitment, and everyday district storytelling, while maintaining brand consistency and trust.

Who: Kate Crowder, Communications Coordinator, Germantown Municipal School District (Tenn.)

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: National School Public Relations Association

More Info

 

Wed, May 6 - The Missing Layer: How Conversational AI Turns Student Reflection into Institutional Intelligence

What: Learn how structured, conversational AI–guided reflection generates continuous, actionable insight into student learning and persistence, without adding new reporting burdens.

Who: Rebecca Thomas, Pathways ePortfolio Director and Associate Teaching Professor of Electrical & Computer Engineering, Bucknell University; Jeffery Yan, Cofounder & CEO, Digication.

When: 2 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: American Association of Colleges and Universities

More Info

 

Wed, May 6 - How AI Can Improve Colleges’ Communications with Students

What: This webinar features a panel of campus leaders discussing how institutions are using AI and other technology to strengthen student communications and keep humans in the loop. We’ll also dig into findings from The Chronicle’s national survey of administrators and faculty on AI for student communications, including perceptions of virtual assistants and why some institutions are funding their own AI tools.

Who: Ian Wilhelm, Deputy Managing Editor, The Chronicle of Higher Education.

When: 2 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: The Chronicle of Higher Education

More Info

 

Thu, May 7 - Codex for everyday work: Take ambitious ideas from start to finish

What: This is a beginner-friendly session on using Codex for real work and everyday tasks. We’ll explain what Codex can help you do in everyday work, and how to start with work you can review, build on, and trust. 

Who: Diana Stegal, Customer Education, OpenAI; Charmaine Pek, AI Deployment, OpenAI; Kelsey Pedersen, Codex, OpenAI.

When: 2 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: OpenAI Academy

More Info

 

Thu, May 7 - Journalist Workshop: Incorporating science into every story

What: Explore the Science Reporting Navigator to incorporate scientific evidence, perspectives or context into your work, even when on deadline. In this hour-long workshop, participants will spend half an hour learning how to use the Science Reporting Navigator as a reporting tool and half an hour workshopping ideas and stories to turn into successful pitches. 

When: 3 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: New England Newspaper & Press Association

More Info

 

Fri, May 8 - AI-powered personalization and its risks

What: Experts from Smith School and an industry leader explore the pros and the cons of this revolutionary change.

Who: Balaji Padmanabhan, Associate Dean for Strategic Initiatives and Director of the Center for Artificial Intelligence in Business, University of Maryland; Eaman Jahani, Assistant Professor, Robert H. Smith School of Business, University of Maryland; Robyn Tomlin, Executive Director, American Press Institute.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: University of Maryland

More Info

20 Articles about the Business of Running an AI

What Happens if Trump Seizes AI Companies – The Atlantic

So, About That AI Bubble: Thanks to the rise of Claude Code and other AI agents, revenues are finally catching up to the hype. – The Atlantic

A.I. Spending Sets a Record, With No End in Sight – New York Times 

AI Has Made Memory Chips One of the World’s Most Profitable Products – Wall Street Journal

Google Signs A.I. Deal with the Pentagon – New York Times

Google workers petition CEO to refuse classified AI work with Pentagon – Washington Post 

OpenAI Sued by Seven Families Over Mass Shooting Suspect’s ChatGPT Use – Wall Street Journal

DeepMind’s David Silver just raised $1.1B to build an AI that learns without human data – TechCrunch

Elon Musk and OpenAI CEO Sam Altman head to court in high-stakes showdown over AI – Associated Press

The Podcast Where You Can Eavesdrop on the A.I. Elite – New York Times  

The AI Splurge Is Costing Big Tech Its Workforce – Wall Street Journal 

DeepSeek’s Sequel Set to Extend China’s Reach in Open-Source A.I. – New York Times

Florida's attorney general announces criminal investigation into OpenAI over shooting – NBC News

Beijing tightens its grip on AI firms that try to shed their Chinese ties – Washington Post

A.I. Start-Ups From Canada and Germany Merge to Take On Silicon Valley – New York Times

Anthropic’s Leaked Code Tests Copyright Challenges in A.I. Era – Wall Street Journal  

Microsoft wants to build the infrastructure behind the AI internet – Axios  

An Investor Dared Him to Quit School. Now He’s Building a $1.5 Billion AI Startup. – Wall Street Journal  

Why AI companies want you to be afraid of them – BBC

The Billionaire Math Geek Who Turned AI Into a Money-Printing Machine – Wall Street Journal

Standalone AI Literacy

The returns on standalone AI literacy without domain depth are heading to zero. What the economy will actually reward is deep domain expertise with AI embedded in industrial context. A financial analyst building AI-driven models needs to understand finance first. A biotech researcher using AI for drug discovery needs to understand biology first. The hard skills underneath the AI layer (mathematical reasoning, scientific literacy, domain knowledge) take years to develop and will hold their value. -Sofia Fenichell

AI definitions: GPT

GPT (Generative Pre-trained Transformer) – GPT refers to an LLM (large language model) that first goes through an unsupervised period (no data labeling by humans), followed by a supervised "fine-tuning" phase (some labeling). G is for Generative, indicating that the model generates new, original text or content. P is for Pre-trained, referring to the training period in which the model learns patterns and structures in the data it is given. T stands for Transformer, the core AI architecture that makes predictions about the output.

More AI definitions

AI definitions: Predictive Analytics

Predictive Analytics - This method of speculating about future events uses past data to make recommendations. Researchers create complex mathematical algorithms in an effort to discover patterns in the data. One doesn't know in advance which data is important; the statistical models created by predictive analytics are designed to discover which pieces of data will predict the desired outcome. While correlation is not causation, a cause-and-effect relationship is not needed in order to make predictions. This process is ideal for anticipating, for instance, what a user is most likely to be interested in based on past behavior and user characteristics. However, after gathering this data, data scientists often turn to causal AI to gauge its impact on user behavior. Some people use the terms “predictive analytics” and “predictive AI” interchangeably, while others treat “predictive analytics” as a broader term that includes non-AI methods such as statistical modeling and regression analysis. While predictive analytics focuses on forecasting future outcomes, generative AI focuses on creating new content. This makes predictive analytics useful for applications such as financial forecasting and health diagnosis, while generative AI is suited to content creation, art and design.

More AI definitions
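As a minimal, hypothetical illustration of the definition above, here is a tiny predictive model fit on invented user-behavior data. The features, labels, and learning-rate choices are all made up; the point is that the model simply learns which inputs predict the outcome, with no causal story required:

```python
# Fit a logistic model on past behavior (weekly visits, prior clicks)
# to predict whether a user will be interested in a recommendation.
import math

# Invented history: (weekly visits, prior clicks, clicked recommendation?)
history = [(1, 0, 0), (2, 1, 0), (5, 3, 1), (6, 2, 1), (8, 5, 1), (0, 0, 0)]

def predict(w, x):
    # Logistic function turns a weighted sum into a probability of interest.
    z = w[0] + w[1] * x[0] + w[2] * x[1]
    return 1 / (1 + math.exp(-z))

# Simple gradient descent on log loss: the weights drift toward whatever
# combination of inputs best separates past outcomes.
w = [0.0, 0.0, 0.0]
for _ in range(2000):
    for visits, clicks, label in history:
        err = label - predict(w, (visits, clicks))
        w = [w[0] + 0.1 * err, w[1] + 0.1 * err * visits, w[2] + 0.1 * err * clicks]

print(predict(w, (7, 4)))   # heavy user: probability near 1
print(predict(w, (1, 0)))   # light user: probability near 0
```

A real pipeline would use a library such as scikit-learn and far more data, but the shape is the same: historical records in, a probability of a future outcome out.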

AI IDs Anonymous Writing

An advanced AI model correctly identified a writer as the author of a 1,000-word scene from an unpublished novel. I tried Claude on the first chapter of a romance novel that I started almost 20 years ago. It identified me after only a few seconds. I fed Claude a different opening chapter from an unpublished science fiction novel I started right before the pandemic. Claude needed only 1,132 words to identify the author. -Megan McArdle writing in The Washington Post

AI definition: Compression-meaning Tradeoff

Compression-meaning Tradeoff – The balance between reducing data size (compression) and preserving the original information (meaning). To manage information overload, humans group items into categories. For instance, we think of poodles and bulldogs as dogs, and we balance that compression with details that set them apart: size, nose, tail, fur type, etc. LLMs strike this balance differently: they use an aggressive compression approach that lets them store vast amounts of knowledge but also contributes to unpredictability and failures. This tension has led many data scientists to conclude that better alignment with human cognition would produce more capable and reliable AI systems.

More AI definitions

Survey on AI Enthusiasm

Only 38% of U.S. respondents to an AI survey said “Yes, products and services using AI make me excited.” In comparison, 84% in China agreed with the statement. While over half the survey respondents said they trust their government to regulate AI responsibly, only 31% in the U.S. did — the lowest score in the study. Singapore had the highest score of 81%, with Indonesia scoring 76% and Malaysia scoring 73%. -Rest of World