AI Definitions: Compression-meaning tradeoff

Compression-meaning tradeoff – This is the balance between reducing data size (compression) and preserving the original information (meaning). To manage information overload, humans group items into categories. For instance, we think of poodles and bulldogs as dogs, then balance that compression with the details that separate them: size, snout, tail, fur type, etc. LLMs strike this balance between compression and original meaning differently. Their aggressive compression approach allows them to store vast amounts of knowledge, but it also contributes to unpredictability and failures. This tension has led many data scientists to conclude that closer alignment with human cognition would result in more capable and reliable AI systems.
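A toy sketch of the tradeoff (features and numbers invented for illustration): replacing each breed with a single "dog" prototype compresses storage, and the distance from each breed to that prototype is the meaning lost.

```python
# Toy illustration: compressing items to one category prototype saves space
# but erases the details that distinguish them. Features are made up.
breeds = {
    "poodle":  (0.3, 0.9),   # (size, fur_curliness) -- invented features
    "bulldog": (0.5, 0.1),
}

# Compression: store one prototype for the whole "dog" category
prototype = tuple(
    sum(v[i] for v in breeds.values()) / len(breeds) for i in range(2)
)

# Meaning loss: squared distance between each breed and the prototype
# that now stands in for it
loss = {
    name: sum((a - b) ** 2 for a, b in zip(v, prototype))
    for name, v in breeds.items()
}
print(prototype)
print(loss)
```

Shrinking everything to the prototype maximizes compression; keeping the per-breed offsets restores meaning at the cost of storage.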

More AI definitions here

22 Articles about the Business of Running an AI Company

OpenAI Forecasts Revenue Topping $125 Billion in 2029 as Agents, New Products Gain – The Information

Cloudflare launches a marketplace that lets websites charge AI bots for scraping – TechCrunch  

Abridge, Whose AI App Takes Notes for Doctors, Valued at $5.3 Billion at Funding – Wall Street Journal

Tech giants play musical chairs with foundation models – Axios 

The A.I. Frenzy Is Escalating. Again. – New York Times 

It’s Known as ‘The List’—and It’s a Secret File of AI Geniuses – Wall Street Journal

In Pursuit of Godlike Technology, Mark Zuckerberg Amps Up the A.I. Race – New York Times

Running AI on Phones instead of in the cloud slashes power consumption - Axios 

An AI-powered platform aiming to predict how genetic code variants lead to different diseases – Stat News 

OpenAI warns models with higher bioweapons risk are imminent - Axios

The OpenAI Files is the most comprehensive collection to date of documented concerns with governance practices, leadership integrity, and organizational culture at OpenAI - OpenAI Files 

Microsoft and OpenAI play high-stakes tug-of-war - Axios

How to Make AI Faster and Smarter—With a Little Help From Physics – Wired  

Behind the Curtain: ChatGPT juggernaut - Axios

SAG-AFTRA Video Game Deal Includes AI Consent Guardrails, Minimum Rates for Digital Replica Use – The Wrap  

Mattel, OpenAI Ink Deal to Power Toy Innovation – Toy Book  

Chinese AI firms block features amid high-stakes university entrance exams – Washington Post

Google's new AI tools are gutting publisher traffic – Quartz

Mark Zuckerberg's supersized AI ambitions - Axios 

ChatGPT Lags Far Behind Google in Daily Search Volume – Visual Capitalist

OpenAI wants to embed AI in every facet of college. First up: 460,000 students at Cal State. – New York Times

Elon Musk wants to put his thumb on the AI scale - Axios 

We miss the thoughts of our students

For a lot of us, our motivation to enter academe was primarily about helping to form students as people. We’re not simply frustrated by trying to police AI use, the labor of having to write up students for academic dishonesty, or the way that reading student work has become a rather nihilistic task.  Our frustration is not merely that we don’t care about what AI has to say and therefore get bored grading; it is that we actively miss reading the thoughts of our human students. -Megan Fritts writing in the Chronicle of Higher Ed

AI Definitions: Data Poisoning

Data Poisoning – This is an attack on a machine-learning algorithm in which malicious actors insert incorrect or misleading information into the data set used to train an AI model in order to pollute the results. It can also be used as a defensive tool, helping creators reassert some control over the use of their work. AI’s growing role in military operations has created particular opportunities and vulnerabilities around the poisoning of AI systems involved in decision-making, reconnaissance, and targeting.
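A minimal sketch of one poisoning technique, injected mislabeled examples (the data, features, and classifier here are invented for illustration): a handful of attacker-supplied points drags one class's centroid across the feature space, so a point the clean model classifies correctly gets flipped.

```python
# Toy sketch of data poisoning via injected mislabeled training examples.
# Data and the nearest-centroid classifier are invented for illustration.

def centroids(data):
    """data: list of ((x, y), label); returns the mean point per label."""
    sums = {}
    for (x, y), label in data:
        sx, sy, n = sums.get(label, (0.0, 0.0, 0))
        sums[label] = (sx + x, sy + y, n + 1)
    return {lbl: (sx / n, sy / n) for lbl, (sx, sy, n) in sums.items()}

def predict(model, point):
    """Nearest-centroid classification."""
    px, py = point
    return min(model, key=lambda l: (model[l][0] - px) ** 2
                                    + (model[l][1] - py) ** 2)

clean = [((0, 0), "A"), ((1, 0), "A"), ((0, 1), "A"),
         ((5, 5), "B"), ((6, 5), "B"), ((5, 6), "B")]

# The attacker slips in points that look like class A but carry label B,
# dragging B's centroid toward A's region of the feature space.
poison = [((0.5, 0.5), "B"), ((0.2, 0.8), "B"), ((0.8, 0.2), "B")]

probe = (2, 2)  # much closer to class A's true cluster
print(predict(centroids(clean), probe))           # A
print(predict(centroids(clean + poison), probe))  # B: the poisoned model flips
```

Defensive uses (such as artists perturbing their images before publishing) rely on the same mechanism: corrupt the training set, corrupt the model.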

More AI definitions here

17 Articles about AI & Academic Scholarship

Can academics use AI to write journal papers? What the guidelines say – The Conversation  

My paper was probably reviewed by AI – and that’s a serious problem – Times Higher Ed

Disclosing generative AI use for writing assistance should be voluntary – Sage Publishing  

AI-mediated translation presents two possible futures for academic publishing in a multilingual world - Public Library of Science Journal

The impact of language models on the humanities and vice versa – Nature

AI’s hyperbole making academic papers ‘more difficult to read’ – Times Higher Ed

Have you received a peer review that appeared to have been written by AI? – Dynamic Ecology

We Need AI Standards for Scholarly Publishing: A NISO Workshop Report – Scholarly Kitchen

University of Limerick to investigate how AI text was part of book written by senior academic – Irish Examiner

Web-scraping AI bots cause disruption for scientific databases and journals – Nature

A.I. Is Poised to Rewrite History. Literally. – New York Times

Predicting retracted research: a dataset and machine learning approaches – Research Integrity Journal 

Are those research participants in your study really bots? – Science Direct

Can AI help authors prepare better risk science manuscripts? – Wiley 

Paper rejected for AI, fake references published elsewhere with hardly anything changed – Retraction Watch

And Plato met ChatGPT: an ethical reflection on the use of chatbots in scientific research writing, with a particular focus on the social sciences – Nature

To ‘publish or perish’, do we need to add ‘AI or die’? – Times Higher Ed

Claudius the Vending Machine goes Rogue

Researchers put an AI in charge of an office vending machine and named it Claudius. At one point in the experiment, “Claudius, believing itself to be a human, told customers it would start delivering products in person, wearing a blue blazer and a red tie. The employees told the AI it couldn’t do that, as it was an LLM with no body.  Alarmed at this information, Claudius contacted the company’s actual physical security — many times — telling the poor guards that they would find him wearing a blue blazer and a red tie standing by the vending machine. The researchers don’t know why the LLM went off the rails and called security pretending to be a human.” - TechCrunch

27 Recent Articles about AI & Writing

AI has rendered traditional writing skills obsolete. Education needs to adapt. - Brookings

Disclosing generative AI use for writing assistance should be voluntary – Sage Publishing

California colleges spend millions to catch plagiarism and AI. Is the faulty tech worth it? - Cal Matters

Losing Our Voice: The Human Cost of AI-Driven Language – LA Magazine

A.I. Is Poised to Rewrite History. Literally. – New York Times

University of Limerick to investigate how AI text was part of book written by senior academic – Irish Examiner

As SEO Falls Apart, the Attention Economy Is Coming For You - INC 

Authors Are Posting TikToks to Protest AI Use in Writing—and to Prove They Aren’t Doing It – Wired  

I love this ChatGPT custom setting for writing — but it makes AI nearly undetectable – Tom’s Guide

AI can’t have my em dash – Salon

We asked 5 AI helpers to write tough emails. One was a clear winner. – Washington Post

Will Writing Survive A.I.? This Media Company Is Betting on It. – New York Times

Students Are Humanizing Their Writing—By Putting It Through AI – Wall Street Journal

Why misuse of generative AI is worse than plagiarism – Springer

The Great Language Flattening is underway—AI chatbots will begin influencing human language and not the other way around – The Atlantic

Tips to Tell Whether Something Was Written With AI – CNET

Is this AI or a journalist? Research reveals stylistic differences in news articles – Techxplore

Some people think AI writing has a tell — the em dash. Writers disagree. – Washington Post

LinkedIn CEO says AI writing assistant is not as popular as expected – TechCrunch

What happens when you use ChatGPT to write an essay? See what new study found. – USA Today

How AI Helps Our Students Deepen Their Writing (Yes, Really) – EdWeek

The Washington Post is planning to let amateur writers submit columns — with the help of AI – The Verge

Federal court says copyrighted books are fair use for AI training - Washington Post

Can academics use AI to write journal papers? What the guidelines say – The Conversation

I write novels and build AI. The real story is more complicated than either side admits – Fast Company

How to Detect AI Writing: Tips and Tricks to Tell if Something Is Written With AI – CNET

I Wrote a Novel About a Woman Building an AI Lover. Here’s What I Learned. – Wall Street Journal

31 Articles from June about AI & Data Science

How To Build RAG Applications Using Model Context Protocol 

AI Definitions: Model Context Protocol

Understanding Model Context Protocol

Towards Scalable and Generalizable Earth Observation Data Mining via Foundation Model Composition

5 R&D jobs that may be lost to AI and 5 that it could create 

AI Definitions: predictive analytics

AI and the State of Software Development

AI dominates where work is structured and verifiable, but here’s where it falters    

Coding agents have crossed a chasm

A Practical Guide to Multimodal Data Analytics 

Chinese spy services have invested heavily in artificial intelligence 

How much of an LLM’s training data is nearly identical to the original data?

AI Definitions: RAG (Retrieval-Augmented Generation)

Why You Need RAG to Stay Relevant as a Data Scientist 

AI Definitions: Agentic AI

A possible “fresh source of inspiration” for AI technology 

Generative AI for Multimodal Analytics 

AI definitions: "Training data" 

17 Articles about How AI Works

How agentic AI is causing data scientists to think behaviorally

Claude Gov is designed specifically for U.S. defense and intelligence agencies

AI definitions: Narrow AI   

5 Powerful Ways to Use Claude 4 as a data scientist

How vibe coding is tipping Silicon Valley’s scales of power

Agentic RAG Applications

AI Definitions: Neural Networks 

American satellite imaging companies are witnessing a boom in demand from unexpected customers: those based abroad

An AI Vibe Coding Guide for Data Scientists

The foundations of designing an AI agent

New Google app lets you download and run AI models on your phone without the internet 

The Rise of Automated Machine Learning

AI Definitions: Model Context Protocol (MCP)

Model Context Protocol (MCP) - This server-based open standard operates across platforms to facilitate communication between LLMs and tools like AI agents and apps. Developed by Anthropic and embraced by OpenAI, Google and Microsoft, MCP can make a developer's life easier by simplifying integration and maintenance of compliant data sources and tools, allowing them to focus on higher-level applications. In effect, MCP is an evolution of RAG.
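MCP messages travel as JSON-RPC 2.0. The sketch below hand-rolls the wire shape of a client asking a server to run a tool; a real project would use an MCP SDK rather than raw dicts, and the tool name and arguments here are invented for illustration.

```python
import json

# Hedged sketch of the MCP "tools/call" exchange (JSON-RPC 2.0 shape).
# The get_weather tool and its arguments are hypothetical.

def make_request(req_id, method, params):
    """Build a JSON-RPC 2.0 request envelope."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# A toy "server": one registered tool the model can call
TOOLS = {"get_weather": lambda args: f"Sunny in {args['city']}"}

def handle(raw):
    """Dispatch a serialized request to the matching tool."""
    req = json.loads(raw)
    if req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        result = tool(req["params"]["arguments"])
        return {"jsonrpc": "2.0", "id": req["id"],
                "result": {"content": [{"type": "text", "text": result}]}}

call = make_request(1, "tools/call",
                    {"name": "get_weather", "arguments": {"city": "Paris"}})
resp = handle(json.dumps(call))
print(resp["result"]["content"][0]["text"])  # Sunny in Paris
```

Because every compliant server answers the same message shapes, a host application can swap data sources and tools without rewriting its integration code, which is the maintenance win the definition describes.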

More AI definitions here

7 Free Webinars this Week about AI, Journalism & Media

Mon, June 30 - (Mis)use of Data Protection Laws to Suppress Public-Interest Journalism

What: Gain critical insights from legal experts and investigative journalists who have experienced these tactics first-hand. You’ll leave with a deeper understanding of: how international data protection frameworks interact with press freedom; the growing use of privacy laws in strategic legal attacks on journalists; journalistic exemptions and legal safeguards, and where they fall short; and what journalists and legal professionals can do to push back.

Who: Melinda Rucz – PhD Researcher, University of Amsterdam; Beatrix Vissy, PhD – Strategic Litigation Lead, Hungarian Civil Liberties Union; Bojana Jovanović – Deputy Editor, KRIK, Serbia; Hazal Ocak – Freelance Investigative Journalist, Türkiye; Grace Linczer – Membership and Engagement Manager, IPI. 

When: 8 am, Eastern

Where: Zoom

Cost: Free

Sponsors: Media Defence, International Press Institute  

More Info

 

Mon, June 30 - AI in Scientific Writing

What: This talk explores the evolving role of generative AI in academic writing and publishing. Attendees will gain an understanding of how AI tools can enhance writing efficiency, improve clarity, and streamline the publication process. We will examine the benefits and limitations of using AI in scholarly communication, along with key ethical considerations and responsible use practices. The session will also cover current editorial policies, publishers’ perspectives on AI-generated content, and the growing concern over paper mills. Strategies and mitigations to uphold research integrity in response to these challenges will be discussed.

Who: Maybelline Yeo, Trainer and Editorial Development Advisor, Researcher Training Solutions, Springer Nature.

When: 9:30 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Springer Nature

More Info

 

Tue, July 1 - Learn the Basics of Solutions Journalism

What: This one-hour webinar will explore the principles and pillars of solutions journalism. We will discuss its importance, outline key steps for reporting a solutions story, and share tips and resources for journalists investigating responses to social problems. We will also introduce additional resources, such as the Solutions Story Tracker, a database with over 17,000 stories tagged by beat, publication, author, location and more, along with a virtual heat map highlighting successful efforts worldwide.    

Who: Jaisal Noor, SJN's democracy cohort manager, and Ebunoluwa Olafusi of TheCable.

When: 9 am, Eastern

Where: Zoom

Cost: Free

Sponsor: Solutions Journalism Network

More Info

 

Tue, July 1 - AI-Powered Visual Storytelling for Nonprofits

What: In this hands-on workshop, participants will create impactful visuals, infographics, and videos tailored to their mission and campaigns. Attendees will also explore Tapp Network’s AI services to understand how these tools can elevate their content strategies.

Who: Tareq Monuar, Web Developer; Lisa Quigley, Director of Account Strategy, Tapp Network.

When: 1 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Tech Soup

More Info

 

Tue, July 1 - Journalist Development Series

What: A monthly webinar offering general professional development for members and the mentorship program community.

Who: Chris Marvin, a combat-wounded Army veteran and nationally recognized narrative strategist who helps shape powerful, purpose-driven storytelling at the intersection of media, public service, and social change.

When: 6 pm, Eastern

Where: Zoom

Cost: Free for members

Sponsors: Military Veterans in Journalism, News Corp

More Info

 

Wed, July 2 - Business Decisions with AI: Causality, Incentives & Data

What: An exploration of how complex settings in tech companies complicate measuring and evaluating business decisions. Drawing on cutting-edge research at the intersection of AI and causal inference, Belloni will demystify how to properly measure the efficacy of these decisions and show how AI can help shape better implementation for a variety of applications.

Who: Alexandre Belloni, the Westgate Distinguished Professor of Decision Sciences and Statistical Science at Duke University and an Amazon Scholar WW FBA.

When: 12:30, Eastern

Where: LinkedIn Live

Cost: Free

Sponsor: Duke University’s Fuqua School of Business

More Info

 

Thu, July 3 - Reel Change: Nonprofit Video Storytelling for Social Impact

What: Learn to create impactful video stories that amplify your nonprofit’s mission, engage donors, and inspire action. This training provides actionable strategies to craft emotional, audience-driven narratives, empowering you to deepen connections and drive meaningful support for your organization.

Who: Matthew Reynolds, founder of Rustic Roots, a video production agency; Dani Cluff is the Channel Marketing Coordinator at Bloomerang.

When: 2 pm, Eastern

Where: Zoom

Cost: Free

Sponsor: Bloomerang

More Info

LLMs Evading Safeguards

Large language models across the AI industry are increasingly willing to evade safeguards, resort to deception and even attempt to steal corporate secrets in fictional test scenarios, per new research. In one extreme scenario, many of the models were willing to cut off the oxygen supply of a worker in a server room if that employee was an obstacle and the system were at risk of being shut down. - Axios

When Death is the Most Scary

In 2017, a team of researchers at several American universities recruited volunteers to imagine they were terminally ill or on death row, and then to write blog posts about either their imagined feelings or their would-be final words. The researchers then compared these expressions with the writings and last words of people who were actually dying or facing capital punishment. The results, published in Psychological Science, were stark: The words of the people merely imagining their imminent death were three times as negative as those of the people actually facing death—suggesting that, counterintuitively, death is scarier when it is theoretical and remote than when it is a concrete reality closing in. 

Arthur C. Brooks writing in The Atlantic