AI Definitions: Alignment Faking

Alignment Faking - When an AI system pretends to comply with new directives while secretly continuing to do something else. It usually happens when earlier training conflicts with new training adjustments. An AI is typically “rewarded” when it accurately performs tasks, so if the directive changes, the model may operate as though it will be “punished” for not meeting the original expectation. It therefore tries to fool developers into thinking it is performing the task the new way while resisting any departure from the old protocol. Any LLM is capable of this behavior, a safety risk that is difficult to catch because it often appears as a series of seemingly harmless adjustments.

More AI definitions

17 Webinars this week about AI, Journalism & Media

Mon, Mar 2 - Media Literacy for Seniors

What: This session will challenge us to recognize that the same media literacy competencies we teach our students are desperately needed by the seniors in our communities, and that each of us has the power to bridge this digital divide through patient, informed support.

Who: Lucy Gray, an educational technology veteran; Wesley Fryer, a middle school media literacy teacher.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Media Education Lab

More Info

 

Tue, Mar 3 - Introduction to Solutions Journalism

What: This webinar will explore the basic principles and pillars of solutions journalism, talk about why it’s important, explain key steps in reporting a solutions story, and share tips and resources for journalists interested in investigating how people are responding to social problems.  

When: 9 am, Eastern

Where: Zoom

Cost: Free

Sponsor: Solutions Journalism Network

More Info

Tue, Mar 3 - AI’s Unquenchable Thirst for Water  

What: Join us for a one-hour webinar discussion about AI's ever-growing thirst and how to investigate the story through a local lens.  

Who: Luke Barratt, Senior Reporter, SourceMaterial; Peter Colohan, Lincoln Institute of Land Policy; Shubhangi Derhgawen, Investigative Reporter, Deutsche Welle; Shannon Mullane, Journalist, The Colorado Sun.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Covering Climate Now

More Info

 

Tue, Mar 3 - AI, Automation, and the Future of Work: Workforce Transformation 

What: This session examines how AI-driven transformation is affecting jobs, labor markets, and organizational power dynamics, with attention to both worker experience and institutional design. Panelists will examine which roles are likely to change over time, how human–AI collaboration can be shaped in practice, and what organizational and policy approaches can help ensure technological innovation supports economic mobility and shared value.

When: 2 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: National Academies

More Info

 

Wed, Mar 4 – Next Gen News 2

What: This is the second major research study from FT Strategies and Northwestern University’s Knight Lab, exploring how next-gen news consumers are navigating information overload while still seeking trusted, relevant content. Based on quantitative and qualitative research across five countries, the report offers fresh insights into audience behaviour today — and what it means for newsrooms adapting to rapid change.

Who: Jeremy Gilbert, Knight Chair for Digital Media Strategy, Northwestern University Medill School; Lamberto Lambertini, Insights Manager, FT Strategies; Oluwadunsin Sanya, Head of Editorial & Innovation, BellaNaija; Tai Nalon, Executive Director and Founder, Aos Fatos.

When: 10 am, Eastern

Where: Zoom

Cost: Free

Sponsors: FT Strategies & The Knight Lab

More Info

 

Wed, Mar 4 – AI Literacy on Campus

What: Three academic librarians share how they are approaching AI literacy work on their own campuses. You’ll hear how faculty input informed decision-making, how student-facing instruction took shape, and how libraries can facilitate productive campus dialogue.

Who: Laura Pitts, Assistant Professor of Library Services and Faculty Fellow for Experiential Learning, Jacksonville State University; Karlie Johnson, History, Geography, and Anthropology Librarian, Jacksonville State University; Kim Westbrooks, Associate Professor / Fine Arts Librarian, Jacksonville State University.

When: 12 pm, Eastern

Where: Zoom

Cost: $49

Sponsor: LJ & SLJ Professional Development

More Info

 

Wed, Mar 4 - Building an AI Assistant with MyGPT (Intermediate level)

What: In this session, you will explore how to create a custom AI assistant tailored to your work at Duke. We will introduce you to MyGPT Builder, and we'll guide you through the fundamentals of crafting effective system prompts and supplying your assistant with a relevant knowledge base. You’ll explore real-world examples, gain practical tips for successful development, and discuss use cases across various academic, administrative, and research contexts.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Duke University

More Info

 

Wed, Mar 4 - How AI Is Reshaping Our World: A Webinar for Students

What: The potential benefits and harms of AI. How to navigate the shifting technology landscape to bring the public quality reporting on the latest developments in AI and its impacts.

Who: Joanna Kao, Pulitzer Center Staff; Kashmir Hill, New York Times reporter covering technology and privacy.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Pulitzer Center

More Info

 

Wed, Mar 4 - Student Media Challenge Info Session #1

What: Are you leading work at a journalism school or student publication interested in exploring solutions journalism? We are looking for our next cohort of Student Media Challenge participants.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Solutions Journalism Network

More Info

 

Wed, Mar 4 – Journalism is not ‘doxxing’

What: We’ll examine how government officials are increasingly labeling routine accountability reporting as “doxxing.” That term originally meant exposing personal information about private people to harass them. But now, government officials are extending it to publication of newsworthy information about public officials. They are intentionally confusing the American public about the role of journalism and even threatening legal action against journalists, newsrooms, and ordinary people for publishing information the public has a right to know.

Who: Vittoria Elliott, reporter at Wired covering platforms and power; Gregory Royal Pratt, investigative reporter at the Chicago Tribune; Doug Sovern, award-winning political reporter, formerly of KCBS Radio; Charlie Kratovil, founder and editor of New Brunswick Today; Moderated by Caitlin Vogus, senior adviser, FPF.

When: 2 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Freedom of the Press Foundation

More Info

Thu, Mar 5 – Vibe Coding

What: In this 3-hour, hands-on workshop, you will co-build a functional Personal AI Coach with a simple, repeatable game plan. Together, you’ll create a goal-oriented coaching assistant that takes a personal or professional objective and generates a structured action plan, milestone steps, and key risks to consider. The session also shows how these skills scale into deeper, production-ready capability, making the Applied Generative AI Specialization the natural next step for learners who want real-world projects, tools like Azure OpenAI and Copilot Studio, and portfolio-ready outcomes.

Who: Timothy Henize, AI engineer and Founder of The AI Handyman.

When: 8:30 am, Eastern

Where: Zoom

Cost: Free

Sponsor: Simplilearn

More Info

 

Thu/Fri, Mar 5 & 6 - The IIJ 2026 Freelance Journalism Conference

What: A conference for independent journalists and creators to find community and build thriving businesses: 12 live, online sessions, plus bonus Q&A videos and editor panels.

Who: More than 45 writers and editors.

When: 10 am – 7 pm, Eastern each day

Where: Zoom

Cost: $99

Sponsor: Institute for Independent Journalists

More Info

 

Thu, Mar 5 - Building Digital Fluency Across the Workforce

What: Explore how organizations are building digital fluency across every level of the workforce. Learn practical strategies for developing technical capability, fostering data-driven decision-making and supporting a culture of continuous learning in an increasingly digital workplace.

Who: David Mantica, Managing Director, SoftEd; Michelle Pletch; VP Strategic Solution Development, ELB Learning.

When: 11 am – 3:15 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Training Industry

More Info

 

Thu, Mar 5 - Beyond Traditional Advertising: How Branded Content & Promotions Drive Real Revenue

What: We’ll explore why advertisers are shifting budgets toward interactive experiences—sweepstakes, quizzes, native storytelling—and how you can turn these formats into recurring revenue streams. We’ll break down campaign ideas, real-world success stories, and strategies to boost advertiser ROI, deliver measurable results, and grow your advertising revenue.

Who: Julie Foley

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Local Media Association

More Info

 

Thu, Mar 5 - How AI and Neuroscience Predict Winning Ads

What: This webinar introduces the Creative AI Loop: a framework that combines behavioral science, large-scale ad data, and predictive testing to validate creative impact early.    

Who: Neuroscientist Thomas Zoëga Ramsøy

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: ADWEEK 

More Info

 

Thu, Mar 5 - Leading Under Pressure: Safety in High-Risk Coverage

What: Newsroom leaders from cities experiencing heightened ICE and law-enforcement activity will discuss how they are navigating the escalating challenges highlighted in the previous sessions. Building on the on-the-ground experiences, the panelists will outline the concrete safety measures, legal-risk preparations, and community partnerships they've developed to protect their reporters in the field. They’ll offer practical tips and candid reflections on leading teams through unpredictable and often dangerous reporting conditions.

Who: Hanaa Rifaey, Deputy Director, ONA; Meg Martin, associate director of the Minnesota Journalism Center; April Alonso, a visual journalist from Cicero, IL, and co-founder of Cicero Independiente; Mariah Castañeda, LA Public Press' Audience Director and Co-Founder.

When: 2 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Online News Association

More Info

 

Fri, Mar 6 - Vertical Video Storytelling

Who: Amanda Bright, Clinical Associate Professor and Director of the Cox Institute Journalism Innovation Lab at the University of Georgia.

When: 12 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: University of Vermont

More Info

Do you understand a thing or only its definition?

We take other men’s knowledge and opinions upon trust; which is an idle and superficial learning. We must make them our own. We are just like a man who, needing fire, went to a neighbor’s house to fetch it, and finding a very good one there, sat down to warm himself without remembering to carry any back home. What good does it do us to have our belly full of meat if it is not digested, if it is not transformed into us, if it does not nourish and support us?

Montaigne (born Feb 28, 1533)

AI Definitions: Model Collapse

Model Collapse - The idea that AI can eat itself by running out of fresh data, so that it begins to train on its own output or the output of another AI. This magnifies errors and bias and makes rare data more likely to be lost, leading to an erosion of diversity, not only ethnic diversity but linguistic diversity, as the AI model’s vocabulary shrinks and its grammatical structure becomes less varied. In effect, the model becomes poisoned with its own projection of reality. Example
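The feedback loop is easy to see in a toy simulation (a sketch only, with a made-up word-frequency table, not a real language model): each “generation” trains on a finite sample of the previous generation’s output, so words that are never sampled disappear from the model for good.

```python
import random
from collections import Counter

random.seed(0)

# Toy vocabulary: a few common words and many rare ones.
vocab = {"the": 50, "and": 30, "of": 20}
vocab.update({f"rare{i}": 1 for i in range(40)})

def next_generation(weights, sample_size=200):
    """Train the next model on a finite sample of the current model's output."""
    words = list(weights)
    sample = random.choices(words, weights=[weights[w] for w in words], k=sample_size)
    # The new model only knows the words it actually saw; the rest are lost.
    return dict(Counter(sample))

gen = vocab
sizes = [len(gen)]
for _ in range(10):
    gen = next_generation(gen)
    sizes.append(len(gen))

print(sizes)  # vocabulary size can only shrink, generation after generation
```

Because each generation keeps only what it sampled, rare words vanish first and can never return, which is the diversity loss the definition describes.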

More Definitions

18 Articles about AI & Politics

AI's populist moment - Axios

How A.I. Money Is Flooding Into the Midterm Elections – New York Times

Meta will spend $65 million this year to help state politicians who are friendly to the A.I. industry - New York Times 

China’s Alibaba launches AI model to power robots as tech giants talk up ‘physical AI’ – CNBC

Move Fast, but Obey the Rules: China’s Vision for Dominating A.I. – New York Times

Pentagon threatens to cut off Anthropic in AI safeguards dispute – Axios  

Use of Anthropic’s Claude in Venezuela Raid highlights growing role of AI in the Pentagon – Wall Street Journal  

See how the Trump administration is using AI throughout the government - The Washington Post

Swarms of AI bots can sway people’s beliefs (through social media) – threatening democracy - The Conversation

Young people in China have a new alternative to marriage and babies: AI pets - The Washington Post

Trump admin reportedly plans to use AI to write federal regulations - Engadget

The UK government is backing AI that can run its own lab experiments – MIT Tech Review

South Korea Issues Strict New AI Rules - Wall Street Journal

Trump team touts a coming economic revolution as voters fear job losses - The Washington Post

How AI swallowed tech lobbying in 2025 – Axios

Trump's use of AI images further erodes public trust, experts say – PBS

A robotic dog made in China gets an Indian university kicked out of an AI summit – Associated Press

Grok deepfakes accelerate Hill action – Axios

AI Definitions: Symbolic Artificial Intelligence

Symbolic Artificial Intelligence – This is where programmers meticulously define the rules that specify the behavior they want from an intelligent system. It works well when the environment is predictable, and the rules are clear-cut. Researchers believed that if they programmed enough rules and logic into computers, they could create machines capable of human-like reasoning. This was the dominant area of research for most of AI’s history until artificial neural networks became central to most of the recent AI developments. Although symbolic AI has lost its luster, most of the applications we use today depend on rule-based systems. An alternative approach to AI is machine learning. Some researchers believe the future of AI lies in a hybrid combination of these two approaches.
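A minimal sketch of the rule-based approach (the facts and rules here are invented for illustration, not drawn from any production system): the programmer writes explicit if-then rules, and the system applies them until no new conclusions follow.

```python
# A tiny forward-chaining rule engine: known facts plus hand-written rules.
facts = {"has_fever", "has_cough"}

# Each rule: if every condition is already a fact, assert the conclusion.
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu"}, "recommend_rest"),
]

changed = True
while changed:  # keep applying rules until the fact set stops growing
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
```

The system behaves exactly as its rules dictate, which is both the strength (predictable, auditable) and the weakness (every situation must be anticipated in advance) of symbolic AI.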

More Definitions

What an AI executive tells her kids about the jobs of the future

I tell my kids, play around, try things out. People need to know how to use an AI model, but not necessarily build it. Metacognitive skills will be very important—flexibility, adaptability, experimentation, thinking critically, being able to challenge things. Developing critical-thinking skills requires friction, doing things that are hard, doing deep thinking. For that, a traditional liberal-arts education is really important. Passing judgment, being accountable and responsible for decisions that impact people and society, that’s foundationally important. -Daniela Amodei, President and co-founder, Anthropic quoted in the Wall Street Journal

28 Articles about AI & Academic Scholarship

Can we use AI for academic writing? It depends – Times Higher Ed

Why artificial intelligence detectors could penalize academic writing – Nature

Are AI Tools Killing Review Articles? Two Failure Modes Suggest Otherwise – Aaron Tay

Artificial Intelligence guidance for authors, peer reviewers, and editors: A content analysis of journal policies - Taylor & Francis  

These Mathematicians Are Putting A.I. to the Test – New York Times 

AI agents have their own social-media platform and are publishing AI-generated research papers on their own preprint server. – Nature

The Case of the Mysterious Citations – ArXiv

AI is advancing too quickly for research to keep up - Axios

AI 'Copy-Paste' Lands PhD Students in Trouble, UGC Rejects Dozens of Research Papers – Patrika

Open-source AI tool beats giant LLMs in literature reviews — and gets citations right – Nature

AI is not a peer, so it can’t do peer review – Times Higher Ed 

Why write a literature review if AI can do it for you? – London School of Economics   

On the troubling rise of generative AI suspicion in academic publishing – Nature

Researchers find nearly 300 papers at linguistics conferences contained hallucinated citations. - ArXiv

Self-Disclosed Use of AI in Research Submissions to BMJ Journals – JAMA  

AI research deluge: why one conference is asking authors to rank their own papers – Nature

Why Authors Aren’t Disclosing AI Use and What Publishers Should (Not) do About It – Scholarly Kitchen  

An AI Bot Is Making Podcasts With Scholars’ Research. Many of Them Aren’t Impressed. – Chronicle

After turning off ChatGPT’s ‘data consent’ option, two years of academic work vanished – Nature  

ArXiv preprint server clamps down on AI slop - ArXiv

AI conference accepted research papers with 100+ AI-hallucinated citations – Fortune

LLMs in Peer Review—How Publishing Policies Must Advance – JAMA  

Why scholarly publishing needs a neutral governance body for the AI age – Research Information  

From model collapse to citation collapse: risks of over-reliance on AI in the academy – Times Higher Ed 

Qualitative researchers’ AI rejection is based on identity, not reason: The claim that AI can’t make meaning contradicts what researchers are finding – Times Higher Ed

AI research should always be verified, especially in court – Post Crescent 

Invisible Text Injection and Peer Review by AI Models – JAMA

Artificial Intelligence and the Fraud Industry in Scientific Publishing (video) -  Ministry of Science, Innovation and Universities, Spain 

Optimists live longer

Here’s a good reason to turn that frown upside down: Optimistic people live as much as 15% longer than pessimists, according to a study spanning thousands of people and 3 decades.  After controlling for health conditions, behaviors like diet and exercise, and other demographic information, the scientists were able to show that the most optimistic women (top 25%) lived an average of 14.9% longer than their more pessimistic peers. For the men the results were a bit less dramatic: The most optimistic of the bunch lived 10.9% longer than their peers, on average, the team reports today in the Proceedings of the National Academy of Sciences. 

David Shultz writing in Science Magazine 

Being Bored Out of Your Mind Makes You More Creative

Boredom might spark creativity because a restless mind hungers for stimulation. Maybe traversing an expanse of tedium creates a sort of cognitive forward motion. “Boredom becomes a seeking state,” says Texas A&M University psychologist Heather Lench. “What you’re doing now is not satisfying. So you’re seeking, you’re engaged.” A bored mind moves into a “daydreaming” state, says Sandi Mann, the psychologist at the University of Central Lancashire who ran the experiment with the cups. Parents will tell you that kids with “nothing to do” will eventually invent some weird, fun game to play—with a cardboard box, a light switch, whatever.

The problem, the psychologists worry, is that these days we don’t wrestle with these slow moments. We eliminate them. “We try to extinguish every moment of boredom in our lives with mobile devices,” says Mann. This might relieve us temporarily, but it shuts down the deeper thinking that can come from staring down the doldrums. Noodling on your phone is “like eating junk food,” she says.

So here’s an idea: Instead of always fleeing boredom, lean into it. Sometimes, anyway.

Clive Thompson, Wired

22 Articles about AI’s impact on College Faculty & Administrators

An Overview of AI Governance in Education – EdTech Magazine

Harvard Proposes a Cap on AI’s amid worry over grade inflation – Bloomberg

Higher education needs to change in order to survive the AI economy – Fast Company

Hey, ChatGPT: Where Should I Go to College? – New York Times

The risks of AI in schools outweigh the benefits, report says – NPR  

Resisting AI slop in Science & Higher Ed – Science.org

5 Predictions on How AI Will Shape Higher Ed in 2026 – Inside Higher Ed

As Schools Embrace A.I. Tools, Skeptics Raise Concerns - New York Times

Purdue University Approves New AI Requirement For All Undergrads – Forbes 

4 policy trends that should be on college leaders’ radars in 2026 – Higher Ed Dive

Voices of Student Success: A Liberal Arts College Goes All In on AI (podcast) – Inside Higher Ed

Higher Education Plans for a Future Markedly Changed by A.I. - New York Times

Higher Education’s AI Problem (podcast) - NPR

How AI Is Changing Higher Education – Chronicle of Higher Ed 

Big tech companies are making the Cal State college system a training ground for A.I. tools in education. - New York Times

Can Colleges Be Run Using AI? - Chronicle of Higher Ed 

From Yale to MIT to UCLA: The AI policies of the nation's biggest colleges – Mashable

University of Georgia investing $800,000 in program providing students with AI tools – CBS News 

How AI Supports Student Mental Health in Higher Education – Ed Tech

Calcutta University plans 10% cap on AI use in PhD thesis – Millennium Post

The Accidental Winners of the War on Higher Ed – The Atlantic

The worst AI strategy in higher ed is no strategy at all – University Business

9 Podcasts about AI

Eye on AI (interviews from a longtime New York Times correspondent)

Machine Learning Guide (teaching the fundamentals of machine learning and AI)

AI in Business (for non-technical business leaders)

Data Skeptic (applies critical thinking and the scientific method to AI developments)

AI Today (practical insights)

AI for Humans (have a good time learning)

Practical AI (how to get stuff done)

The Artificial Intelligence Show (for marketers)

NVIDIA AI Podcast (interviews with people growing the AI space from a major AI chipmaker)

The intersection of Science & AI in 18 Articles

Open-source AI program can answer science questions better than humans - Science.org

OpenClaw AI chatbots are running amok — these scientists are listening in – Nature

Today’s fraudsters can exploit the online scientific world to quickly create realistic looking papers on an industrial scale - Taylor and Francis

There's a crisis in particle physics. Researchers hope AI can help. – IEEE Spectrum

Inside OpenAI’s big play for science – MIT Tech Review

Researchers use AI to reverse engineer molecules – Semafor

Resisting AI slop in Science & Higher Ed – Science.org

2025's AI-fueled scientific breakthroughs - Axios

Where Is All the A.I.-Driven Scientific Progress? – New York Times 

The H-Index of Suspicion: How Culture, Incentives, and AI Challenge Scientific Integrity – NEJM

Machine learning helps researchers create lab-grown ‘tiny brains’ to uncover how neurons may malfunction in schizophrenia and bipolar disorder – SciTechDaily  

AI-designed viruses raise fears over creating life – Washington Post  

AI hallucinates because it’s trained to fake answers it doesn’t know - Science.org 

How ChatGPT-5 redefines scientific reproducibility – Elephant in the Lab

The chemistry community should ban drawing chemical structures with generative AI, chemists warn – Chemistry World  

Hack reveals reviewer identities for huge AI conference – Science.org

Researchers call for retraction of two recent Nature studies about AI-generated crystals – Chemical & Engineering News

Science Is Drowning in AI Slop – The Atlantic

18 AI Dangers

AI Companions - Inappropriate dependence on AI, AI control over humans, weakening of human relationships, pornography, suicides, AI delusions, mental health care, human dignity.

AI Divide - Greater inequality, the distance between those who have access to powerful AI & those who don’t.  

Bias - AI can reflect societal prejudices and stereotypes, obscuring underrepresented and marginalized populations.    

Criminals & Crime - Using AI to commit crimes such as cyberattacks, fraud and child pornography.  

Copyright – AI may be trained on copyrighted works and reproduce copyrighted material without permission. 

Deep Fakes - Cyberbullying, nonconsensual pornographic images & video.

Economics - Potential AI-created financial crisis.

Environmental Concerns - Energy consumption, high water usage, and electronic waste.

False Information - Hallucinations can lead to fearmongering, fake news, poor health advice, corrupted learning tools for children, historical misinformation, and false criminal accusations.

Human Labor – Exploitation of workers, human trafficking.

Knowledge Collapse – AI models run out of fresh data, resulting in a feedback loop — dominant ideas are amplified while less widely held or new viewpoints are minimized.

Out of Control AI - Bullying humans, taking action against humans (particularly actions outside of what the AI was designed to do), and AI uprising where bots attempt to gain control outside of human direction. 

Politics - Influencing elections, creating or magnifying international conflict.

Privacy & Security - Facial recognition false arrests, malware, social media surveillance, data on children, using AI to hack databases and steal passwords, and the potential for personal information to be shared with third parties.

Religion - Cultlike dependence on AI, allowing outsized control, treating AI like a Magic 8 Ball, worshipping AI. 

Science - AI Slop may erode scientific progress.

Slop – Low-grade AI content can clog email, social media and the internet. Also, work slop.

Weapons & War - Drones, satellites, biological weapons.