“Social media is populist and polarizing; AI may be the opposite”

While different AI platforms behave in subtly different ways, all of them nudge people away from the most extreme positions and towards more moderate and expert-aligned stances. This remains true after accounting for partisan differences in AI platform usage and chatbots’ sycophantic tendencies. -John Burn-Murdoch, chief data reporter for the Financial Times

AI Definitions: Training Data

Training Data – A massive amount of text is initially fed into the system to train it. The AI uses this text to build a map of relationships so it can make predictions. Giving the AI lots of data means more options, which can lead to more creative results. However, it can also make the model more vulnerable to attackers and more prone to hallucinations. Using smaller, curated, locked-down data sets makes AI models less vulnerable and more predictable, but also less creative. 

More AI definitions

32 Free Online AI Classes

AI Essentials – Google (through Coursera)

AI Fluency for Educators (Anthropic)

AI Fluency for nonprofits (Anthropic)

AI Fluency for Students (Anthropic)

AI Fluency: Framework & Foundations (Anthropic)

AI for Everyday Living: A Beginner Workshop for Older Adults (OpenAI Academy)

AI For Everyone (Coursera)

ChatGPT 101: The Complete Beginner's Guide and Masterclass (Udemy)

ChatGPT for Education 101 (OpenAI Academy)

ChatGPT for Education 102 (OpenAI Academy)

ChatGPT for Government 101 (OpenAI Academy)

ChatGPT for Government 102 (OpenAI Academy)

ChatGPT Foundations: Getting Started with AI (OpenAI Academy)

Claude 101 (Anthropic)

Claude Code in Action (Anthropic)

Coursera’s AI courses

Exploring ChatGPT in 2 hours: Practical Guide for Beginners (Udemy)

Generative AI for Data Analysts – IBM (through Coursera)

Generative AI for Data Scientists – Google (through Coursera)

Generative AI for Everyone (Coursera)

Generative AI with Large Language Models (Coursera)

Introduction to Claude Cowork (Anthropic)

Intro to Generative AI: A Beginner’s Primer on Core Concepts - Google (through Coursera)

Introduction to Generative AI (Google)

Learn how to use ChatGPT to Make Money! (Udemy)

Make Teaching Easier with Artificial Intelligence (Udemy)

Master Basics of ChatGPT & OpenAI API (Udemy)

Microsoft AI Product Manager – Microsoft (through Coursera)

Prompt Engineering for ChatGPT – Vanderbilt University (through Coursera)

Prompting with Purpose (OpenAI Academy)

Small Business Jam: Online AI Skill Lab (OpenAI Academy)

Teaching AI Fluency (Anthropic)

AI Definitions: Tokens

Tokens - Think of a token as the root of a word. “Creat” is the “root” of words like create, creative, creator, creating, and creation. An LLM looks for correlations — words that go together, like giraffe and neck. This group of words is represented by a token. A single word might be split into several tokens, since the word might have multiple meanings and its subwords will likely correlate with many other subwords. One token generally corresponds to ~4 characters of common English text. Examples
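A toy sketch can make the “root of a word” idea concrete. The tiny vocabulary below is invented for illustration; real tokenizers (e.g., byte-pair encoding) learn tens of thousands of subwords from data rather than using a hand-made list.

```python
# Toy subword tokenizer (illustrative only; not a real LLM tokenizer).
# The vocabulary is invented for this example.
VOCAB = {"creat", "e", "ive", "or", "ing", "ion"}

def tokenize(word: str) -> list[str]:
    """Greedy longest-match segmentation into known subwords."""
    tokens, i = [], 0
    while i < len(word):
        # Try the longest substring starting at i that is in the vocabulary.
        for j in range(len(word), i, -1):
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # fall back to a single character
            i += 1
    return tokens

print(tokenize("creative"))  # ['creat', 'ive']
print(tokenize("creation"))  # ['creat', 'ion']
```

The shared “creat” token is why related words end up near one another in the model’s map of relationships.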

More AI definitions

The AI Niceness Overload

Stanford researchers say chatbots are overly agreeable when giving interpersonal advice, affirming users' behavior even when harmful or illegal. On top of that, users could not distinguish when an AI was acting overly agreeable. The study’s lead author worries that the sycophantic advice will worsen people’s social skills and ability to navigate uncomfortable situations. “AI makes it really easy to avoid friction with other people.” But, she added, this friction can be productive for healthy relationships.  More from Stanford

A Simple Explanation as to How AI (LLMs) Works

Building the AI 

Large Language Models (LLMs). Computer programs that do one thing: predict the next “token.”   

Training Data. A massive amount of text is initially fed into the system to train it.  

Parameters. The internal rules and limitations learned from the training data. 

Tokenization part 1: pre-training. The process of converting the raw training data (text, images, or audio) into small units called tokens. 

What Happens When Someone Uses the AI

Prompt. A user asks a question.

Tokenization part 2: inference. The process of converting the prompt (whether text, images, or audio) into small units called tokens. 

Embedding. The conversion of tokens into numbers (vectors) so the computer can look at their relationships. 

Vector databases. The storage and search engine for vector embeddings.  

RAG. The system searches the vector database for material relevant to the prompt, to reduce hallucinations and supply up-to-date information.

Transformers. The core AI architecture that uses vectors to make a prediction about which token to generate next for the prompt.  
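The embedding and retrieval steps above can be sketched in a few lines. The documents, vectors, and query below are all invented toy values standing in for a learned embedding model and a real vector database; the point is only to show how similarity between vectors drives RAG-style lookup.

```python
import math

# Toy "vector database": documents mapped to hand-made embedding vectors.
# In a real system, an embedding model produces these vectors.
docs = {
    "giraffes have long necks":    [0.9, 0.1, 0.0],
    "transformers predict tokens": [0.1, 0.9, 0.2],
    "zoo animals eat leaves":      [0.5, 0.4, 0.3],
}

def cosine(a, b):
    """Cosine similarity: how closely two vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, k=1):
    """RAG-style lookup: return the k documents most similar to the query."""
    ranked = sorted(docs, key=lambda d: cosine(docs[d], query_vec), reverse=True)
    return ranked[:k]

# A query vector close to the "giraffe" document's vector.
print(retrieve([0.8, 0.2, 0.0]))  # ['giraffes have long necks']
```

The retrieved text would then be added to the prompt before the transformer predicts its next tokens.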

13 Webinars this week about AI, Journalism & Media

Mon, Mar 30 - Responsible Journalism

What: Best practice when reporting on domestic abuse and sexual violence.

When: 9 am, Eastern

Where: Zoom

Cost: Free

Sponsor: Welsh Women’s Aid

More Info

 

Tue, Mar 31 - Agile Infographics

What: Quickly and efficiently make professional infographics. To stay current, designers want to adopt “Agile methodologies.” This cutting-edge workshop shows, step by step, how to use Agile to turn words into professional, compelling infographics quickly. Learn the proven techniques and tools the pros use to do more with less. 

Who: Mike Parkinson, Author and Owner, Billion Dollar Graphics.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Training Magazine

More Info

 

Tue, Mar 31 - The Future of Reporting: Navigating AI's Impact on Editorial

What: Our expert panel will break down how B2B media companies are responding to media landscape changes and share actionable strategies for journalists to thrive. We’ll discuss: Which tasks are easily automated to save you time; Where to lean on human strengths like deep connections and context; The impact of AI on source verification and research; The B2B media roadmap for an AI-integrated future.

Who: Brendan Howard, Freelance Podcast Host; Maria Korolov, Technology Journalist & Author; Alexis Gajewski, Associate Director of Newsroom Operations, Endeavor B2B; Priyanka Rao, Founder & CEO, AI Champions.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: American Society of Business Publication Editors

More Info

 

Tue, Mar 31 - The Panama Papers at 10: What Changed — and What Hasn’t

What: Ten years ago, the Panama Papers exposed the hidden offshore financial system used by politicians, billionaires and criminals around the world. Its impact continues to shape the fight against financial secrecy today. A conversation about how the investigation unfolded, the reforms it triggered and why the struggle for transparency is far from over.

Who: ICIJ Executive Director Gerard Ryle; international tax justice expert Tove Maria Ryding.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: International Consortium of Investigative Journalists

More Info

 

Tue, Mar 31 - Media Law & Press Freedom in the Current Administration

What: We will evaluate the angles of attack against journalists and news organizations and discuss how to exercise First Amendment rights in the face of hostility and near constant threats.

Who: Jeffrey Hermes, Deputy Director, Media Law Resource Center; George Freeman, Executive Director, Media Law Resource Center.

When: 6 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: National Association of Hispanic Journalists

More Info

 

Wed, April 1 - Advertising Transformation in Local Media: How Regional Publishers Are Rebuilding Revenue

What: How the publisher's advertiser relationships were tested in real time as they covered one of the most consequential local news stories in the country: the federal immigration enforcement operations that have drawn national attention.

Who: Brian Kennett, VP and head of digital advertising and agency services at the Minnesota Star Tribune; Dave Karabag, regional VP of advertising sales at the Orlando Sentinel.

When: 10 am, Eastern

Where: Zoom

Cost: Free to members

Sponsor: International News Media Association

More Info

 

Wed, April 1 - ChatGPT for Teachers 101  

What: This session will provide a practical walkthrough of the platform and show how teachers, school staff, and district administrators can begin using AI to support their day-to-day work.

Who: Kirk Gulezian, Education & Government, OpenAI.

When: 11 am, Eastern

Where: Zoom

Cost: Free

Sponsor: OpenAI Academy

More Info

 

Wed, April 1 - Using workspace analytics to drive AI adoption

What: A hands-on session on how teams use Workspace Analytics in ChatGPT Enterprise to run stronger rollouts—finding where adoption is gaining traction, where it’s stalling, and what to do next. 

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: OpenAI Academy

More Info

 

Wed, April 1 - 8 Ways to Grow Your Nonprofit’s Following on Social Media

What: The average growth rate for new social media followers ranges from .64% to 3% per month, depending upon the platform. In other words, the era of organic growth on social media is over. To grow your nonprofit’s following on social media, you need to make a concerted effort to let your supporters and donors know how to find your nonprofit on social media. This free 20-minute webinar will present eight ways to grow your nonprofit’s following on social media.

Who: Heather Mansfield, Founder of Nonprofit Tech for Good.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Nonprofit Tech

More Info

 

Wed, April 1 - Disability Guide Launch  

What: Built by Military Veterans in Journalism (MVJ) in collaboration with the Disabled Journalists Association (DJA), Fix the Frame: A Newsroom Guide to Disability Narratives is a practical resource designed to help journalists produce more accurate, respectful, and inclusive reporting on disability.

Who: Zack Baddorf (MVJ); Cara Reedy (DJA); Rebecca Cokley, Ford Foundation; Sam Kille; Beth Haller; Russell Midori (MVJ); Devon Lancia (MVJ).

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Military Veterans in Journalism

More Info

 

Thu, April 2 - How to Reinforce Training with AI and Coaching

What: We’ll explore how leaders can automate coaching prompts, personalize development pathways, and measure impact without losing the human touch. The combination of AI and coaching creates an ecosystem of intelligent reinforcement—keeping learners engaged long after training ends.

Who: Tim Hagen, President of Progress Coaching.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: OpenSesame

More Info

 

Thu, April 2 - AI‑Driven Cyber Threats: What’s Real, What’s Hype, and How to Prepare

What: We will explore how AI is accelerating both cyberattacks and defenses, why SMBs and mid-market organizations are increasingly targeted, and how the gap between traditional security tools and AI-era threats continues to widen. 

Who: Justin Vredeveld, Business Development Manager; Jared Olson, Security Team Lead; Blake Mielke, Incident Response Lead.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Ontech Systems

More Info

 

Fri, April 3 - Using AI to Measure the Efficacy of Professional Learning 

What: How using AI to analyze large collections of data can shed light on the efficacy of professional learning.

Who: Lisa Schmucki, Founder and CEO of edWeb.net; Thor Prichard, President and CEO of Clarity Innovations.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: EdWeb.net

More Info

Reducing Loneliness: AI or Human?

Researchers from the University of British Columbia found that first-semester college students who texted a randomly selected fellow first-semester college student every day for two weeks experienced around a nine percent reduction in feelings of loneliness. The same two weeks of daily messaging with a Discord chatbot reduced loneliness by around two percent, which turned out to be the same amount as daily one-sentence journaling. -404 Media

A more meaningful study might teach the LLM to mimic a first-semester student of the same economic and social background as the study participant.

The Place of AI-Generated Code

Computer programming is now becoming a conversation, a back-and-forth talk fest between software developers and their bots. Coding is perhaps the first form of very expensive industrialized human labor that A.I. can actually replace. A.I.-generated videos look janky, artificial photos surreal; law briefs can be riddled with career-ending howlers. But A.I.-generated code? If it passes its tests and works, it’s worth as much as what humans get paid $200,000 or more a year to compose. -New York Times

AI Definitions: Abstractive Summarization

Abstractive Summarization (ABS) – A natural language processing summary technique that generates new sentences not found in the source material. In contrast, extractive summarization sticks to the original text, identifying the important sections to produce a subset of sentences taken from the original text. Abstractive summarization is better when the meaning of the text is more important than exactness, while extractive summarization is better when sticking to the original language is critical. 
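The extractive side of this contrast is simple enough to sketch. The word-frequency scoring below is a deliberately minimal stand-in for the far richer models used in practice; it only illustrates the core idea of selecting existing sentences rather than generating new ones.

```python
import re
from collections import Counter

# Minimal extractive-summarization sketch: score each sentence by the
# average frequency of its words across the whole text, then keep the
# top-scoring sentence(s) in their original order.
def extractive_summary(text: str, n_sentences: int = 1) -> str:
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> float:
        words = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[w] for w in words) / max(len(words), 1)

    best = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return " ".join(s for s in sentences if s in best)
```

Every sentence in the output appears verbatim in the input, which is exactly the property that makes extraction preferable when sticking to the original language is critical.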

More AI definitions

The E-Nose

Scientists have been developing and refining a technology called the e-nose—which is exactly what it sounds like. These systems detect and distinguish aromas, sometimes with about 1,000 times as much precision as humans can. Researchers are exploring—or even commercializing—e-nose systems that can scan a person’s breath to detect deadly infections, sniff the air in a building to seek out signs of potential contaminants, or even develop perfumes more quickly and cheaply than before. - Wall Street Journal

The intersection of Science & AI in 15 Articles

Unresolved Copyright & AI Questions


In March 2026, the Supreme Court declined to hear Thaler v. Perlmutter.

This leaves in place the D.C. Circuit’s March 2025 ruling.

There was no ruling on the merits. This doesn’t set precedent.

Stephen Thaler listed his AI system as the sole author, disclaimed any human creative contribution, and asked for copyright protection anyway. The D.C. court said no.

The D.C. Circuit held that copyright law requires a human author, a requirement that does not rule out works created with AI assistance.

Questions not resolved:

1.         How Much Human Involvement Is Enough? 

The US Copyright Office said in its “Zarya of the Dawn” comic book registration decision that the AI-generated images weren’t protectable, but the human-authored text and the selection and arrangement of text and images were. So far, the Office has said that prompts are not copyrightable—prompting is more like giving instructions to a commissioned artist than actually determining the expressive content of the final image. But what if dozens, even hundreds of prompts are entered? Wouldn’t that involve substantial human effort, iterative refinement, and a creative vision? The Copyright Office says getting different results from the same prompt is proof the user isn’t controlling the expression. The underlying question is this: Is prompting closer to authorship or closer to curation?

2.         Can You Prove What You Contributed?

If your work incorporates more than a de minimis amount of AI-generated material, the Copyright Office requires a disclosure statement about the AI involvement and a description of your human contribution. This means the creator must keep files, prompts, drafts, notes on what was intended, and layered edits—in case there is a need to prove exactly what the human contribution was. A copyright applicant can avoid this simply by not disclosing the AI use. The system, in effect, rewards silence.  

3.         What Happens When Uncopyrightable AI Output Gets Licensed Anyway?

AI-generated materials are already being licensed, bundled, and sold. An example: Someone took a Python library and used an AI coding agent to rewrite it, then changed the project’s license to a more permissive one. The original creator objected, saying the original license still applied.

4.         AI Output Can Absolutely Infringe. So Now What?

The SCOTUS denial also prompted a wave of commentary suggesting that AI-generated works now exist in some kind of copyright-free zone. They don’t. Issues still on the table: Whether AI-generated summaries of news articles are substitutive enough to infringe, and whether AI-generated narrative retellings of novels cross the line from ideas to expression. One judge dismissed claims that AI bullet-point summaries of investigative journalism were substantially similar to the originals. The same judge allowed a lawsuit to proceed because ChatGPT’s summary of a novel might have captured the “overall tone and feel” of the original work.

Bottom line: Millions of people are using AI tools every day without knowing whether what they’re making is protectable, infringing, both, or neither. 

Thaler Is Dead. Now for the AI Copyright Questions That Actually Matter 

20 Recent Articles about AI & Academic Scholarship

Research integrity is locked into an arms race with agentic AI slop – LSE  

AI can help with research, but humans must remain accountable say university executives – Times Higher Ed 

Hallucinated citations produced by generative artificial intelligence may constitute research misconduct when citations function as data in scholarly papers – Taylor & Francis

AI tool flags plagiarism in 95% of Ph.D. theses submitted this year at Indian university. – Times of India 

How AI use in scholarly publishing threatens research integrity, lessens trust, and invites misinformation – Bulletin of the Atomic Scientists

Hallucinated References: Five Excuses for Academic Misconduct – Dorethea Baur

Ministers urged not to allow data mining of academic literature – Research Professional News

Librarian finds ‘preposterous number’ of fake references in paper from Springer Nature journal – Retraction Watch 

AI is inventing academic articles – and scholars are citing them – the Observer  

DataSeer develops AI system to track dataset reuse – Research Information  

Journal Submissions Riddled With AI-Created Fake Citations – Inside Higher Ed

Account for AI in the environmental footprint of scientific publishing – Nature  

Will AI Help or Hinder Scientific Publishing? – Undark

Hey ChatGPT, write me a fictional paper: these LLMs are willing to commit academic fraud. – Nature

Scientists are failing to disclose their use of AI despite journal mandates, finds study – Physics World

AI in the editorial workflow: Journals set the rules, institutions set the habits – Scholarly Futures  

AI is turning research into a scientific monoculture - Nature

What happens when reviewers receive AI feedback in their reviews? – ArXiv

Human versus artificial intelligence: investigating ability of young academics from research and non-research institutions to identify ChatGPT-generated dental research abstracts - Nature 

Fear of stigma blamed as 0.1 per cent of papers declare AI use - Times Higher Ed