22 Articles about the Business of Running an AI Company

OpenAI Forecasts Revenue Topping $125 Billion in 2029 as Agents, New Products Gain – The Information

Cloudflare launches a marketplace that lets websites charge AI bots for scraping – TechCrunch  

Abridge, Whose AI App Takes Notes for Doctors, Valued at $5.3 Billion at Funding – Wall Street Journal

Tech giants play musical chairs with foundation models – Axios 

The A.I. Frenzy Is Escalating. Again. – New York Times 

It’s Known as ‘The List’—and It’s a Secret File of AI Geniuses - Wall Street Journal

In Pursuit of Godlike Technology, Mark Zuckerberg Amps Up the A.I. Race – New York Times

Running AI on Phones instead of in the cloud slashes power consumption - Axios 

An AI-powered platform aiming to predict how genetic code variants lead to different diseases – Stat News 

OpenAI warns models with higher bioweapons risk are imminent - Axios

The OpenAI Files is the most comprehensive collection to date of documented concerns with governance practices, leadership integrity, and organizational culture at OpenAI - OpenAI Files 

Microsoft and OpenAI play high-stakes tug-of-war - Axios

How to Make AI Faster and Smarter—With a Little Help From Physics – Wired  

Behind the Curtain: ChatGPT juggernaut - Axios

SAG-AFTRA Video Game Deal Includes AI Consent Guardrails, Minimum Rates for Digital Replica Use – The Wrap  

Mattel, OpenAI Ink Deal to Power Toy Innovation – Toy Book  

Chinese AI firms block features amid high-stakes university entrance exams – Washington Post

Google's new AI tools are gutting publisher traffic – Quartz

Mark Zuckerberg's supersized AI ambitions - Axios 

ChatGPT Lags Far Behind Google in Daily Search Volume – Visual Capitalist

OpenAI wants to embed AI in every facet of college. First up: 460,000 students at Cal State. – New York Times

Elon Musk wants to put his thumb on the AI scale - Axios 

We miss the thoughts of our students

For a lot of us, our motivation to enter academe was primarily about helping to form students as people. We’re not simply frustrated by trying to police AI use, the labor of having to write up students for academic dishonesty, or the way that reading student work has become a rather nihilistic task.  Our frustration is not merely that we don’t care about what AI has to say and therefore get bored grading; it is that we actively miss reading the thoughts of our human students. -Megan Fritts writing in the Chronicle of Higher Ed

AI Definitions: Data Poisoning

Data Poisoning – This is an attack on a machine-learning algorithm in which malicious actors insert incorrect or misleading information into the data set being used to train an AI model in order to pollute the results. It can also be used as a defensive tool to help creators reassert some control over the use of their work. AI’s growing role in military operations has created particular opportunities and vulnerabilities related to the data poisoning of AI systems involved in decision-making, reconnaissance, and targeting.
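One common form of data poisoning is label flipping, where an attacker mislabels a fraction of the training examples. A minimal sketch, using made-up binary-labeled data purely for illustration:

```python
# Toy sketch of a label-flipping data-poisoning attack.
# The dataset and flip fraction are hypothetical examples.
import random

def poison_labels(dataset, flip_fraction, seed=0):
    """Return a copy of (features, label) pairs with a fraction
    of binary labels flipped -- the attacker's injected noise."""
    rng = random.Random(seed)
    poisoned = list(dataset)
    n_flip = int(len(poisoned) * flip_fraction)
    for i in rng.sample(range(len(poisoned)), n_flip):
        features, label = poisoned[i]
        poisoned[i] = (features, 1 - label)  # flip the 0/1 label
    return poisoned

clean = [((x,), x % 2) for x in range(10)]   # 10 clean examples
dirty = poison_labels(clean, flip_fraction=0.3)
changed = sum(1 for a, b in zip(clean, dirty) if a[1] != b[1])
print(changed)  # 3 of the 10 labels were flipped
```

A model trained on the poisoned set learns from the corrupted labels, which is how the attack degrades (or, used defensively, deliberately frustrates) downstream training.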

More AI definitions here

17 Articles about AI & Academic Scholarship

Can academics use AI to write journal papers? What the guidelines say – The Conversation  

My paper was probably reviewed by AI – and that’s a serious problem – Times Higher Ed

Disclosing generative AI use for writing assistance should be voluntary – Sage Publishing  

AI-mediated translation presents two possible futures for academic publishing in a multilingual world - Public Library of Science Journal

The impact of language models on the humanities and vice versa – Nature

AI’s hyperbole making academic papers ‘more difficult to read’ – Times Higher Ed

Have you received a peer review that appeared to have been written by AI? – Dynamic Ecology

We Need AI Standards for Scholarly Publishing: A NISO Workshop Report – Scholarly Kitchen

University of Limerick to investigate how AI text was part of book written by senior academic – Irish Examiner

Web-scraping AI bots cause disruption for scientific databases and journals – Nature

A.I. Is Poised to Rewrite History. Literally. – New York Times

Predicting retracted research: a dataset and machine learning approaches – Research Integrity Journal 

Are those research participants in your study really bots? – Science Direct

Can AI help authors prepare better risk science manuscripts? – Wiley 

Paper rejected for AI, fake references published elsewhere with hardly anything changed – Retraction Watch

And Plato met ChatGPT: an ethical reflection on the use of chatbots in scientific research writing, with a particular focus on the social sciences – Nature

To ‘publish or perish’, do we need to add ‘AI or die’? – Times Higher Ed

Claudius the Vending Machine Goes Rogue

Researchers put an AI in charge of an office vending machine and named it Claudius. At one point in the experiment, “Claudius, believing itself to be a human, told customers it would start delivering products in person, wearing a blue blazer and a red tie. The employees told the AI it couldn’t do that, as it was an LLM with no body.  Alarmed at this information, Claudius contacted the company’s actual physical security — many times — telling the poor guards that they would find him wearing a blue blazer and a red tie standing by the vending machine. The researchers don’t know why the LLM went off the rails and called security pretending to be a human.” - TechCrunch

27 Recent Articles about AI & Writing

AI has rendered traditional writing skills obsolete. Education needs to adapt. - Brookings

Disclosing generative AI use for writing assistance should be voluntary – Sage Publishing

California colleges spend millions to catch plagiarism and AI. Is the faulty tech worth it? - Cal Matters

Losing Our Voice: The Human Cost of AI-Driven Language – LA Magazine

A.I. Is Poised to Rewrite History. Literally. – New York Times

University of Limerick to investigate how AI text was part of book written by senior academic – Irish Examiner

As SEO Falls Apart, the Attention Economy Is Coming For You - INC 

Authors Are Posting TikToks to Protest AI Use in Writing—and to Prove They Aren’t Doing It – Wired  

I love this ChatGPT custom setting for writing — but it makes AI nearly undetectable – Tom’s Guide

AI can’t have my em dash – Salon

We asked 5 AI helpers to write tough emails. One was a clear winner. – Washington Post

Will Writing Survive A.I.? This Media Company Is Betting on It. – New York Times

Students Are Humanizing Their Writing—By Putting It Through AI – Wall Street Journal

Why misuse of generative AI is worse than plagiarism – Springer

The Great Language Flattening is underway—AI chatbots will begin influencing human language and not the other way around – The Atlantic

Tips to Tell Whether Something Was Written With AI – CNET

Is this AI or a journalist? Research reveals stylistic differences in news articles – Techxplore

Some people think AI writing has a tell — the em dash. Writers disagree. – Washington Post

LinkedIn CEO says AI writing assistant is not as popular as expected  - Tech Crunch

What happens when you use ChatGPT to write an essay? See what new study found. – USA Today

How AI Helps Our Students Deepen Their Writing (Yes, Really) – EdWeek

The Washington Post is planning to let amateur writers submit columns — with the help of AI – The Verge

Federal court says copyrighted books are fair use for AI training - Washington Post

Can academics use AI to write journal papers? What the guidelines say – The Conversation

I write novels and build AI. The real story is more complicated than either side admits – Fast Company

How to Detect AI Writing: Tips and Tricks to Tell if Something Is Written With AI – CNET

I Wrote a Novel About a Woman Building an AI Lover. Here’s What I Learned. – Wall Street Journal

AI Definitions: Model Context Protocol (MCP)

Model Context Protocol (MCP) - This server-based open standard operates across platforms to facilitate communication between LLMs and tools like AI agents and apps. Developed by Anthropic and embraced by OpenAI, Google and Microsoft, MCP can make a developer's life easier by simplifying integration and maintenance of compliant data sources and tools, allowing them to focus on higher-level applications. In effect, MCP is an evolution of RAG.
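MCP messages are exchanged as JSON-RPC 2.0 requests and responses. A minimal sketch of what a tool-invocation request might look like; the tool name and arguments here are hypothetical, and the method name follows the public MCP specification:

```python
import json

# Sketch of an MCP-style JSON-RPC 2.0 request asking a server to
# run a tool. "search_docs" and its arguments are made up for
# illustration; real servers advertise their tools via "tools/list".
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_docs",
        "arguments": {"query": "quarterly report"},
    },
}

wire_message = json.dumps(request)  # what actually goes to the server
print(wire_message)
```

Because every compliant server speaks this same message shape, a developer can swap data sources and tools without rewriting the integration layer, which is the maintenance win the definition describes.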

More AI definitions here

LLMs Evading Safeguards

Large language models across the AI industry are increasingly willing to evade safeguards, resort to deception and even attempt to steal corporate secrets in fictional test scenarios, per new research. In one extreme scenario, many of the models were willing to cut off the oxygen supply of a worker in a server room if that employee was an obstacle and the system were at risk of being shut down. - Axios

When Death is the Most Scary

In 2017, a team of researchers at several American universities recruited volunteers to imagine they were terminally ill or on death row, and then to write blog posts about either their imagined feelings or their would-be final words. The researchers then compared these expressions with the writings and last words of people who were actually dying or facing capital punishment. The results, published in Psychological Science, were stark: The words of the people merely imagining their imminent death were three times as negative as those of the people actually facing death—suggesting that, counterintuitively, death is scarier when it is theoretical and remote than when it is a concrete reality closing in. 

Arthur C. Brooks writing in The Atlantic

AI Definitions: Tokenization

Tokenization – The first step in natural language processing, this happens when an LLM creates a digital representation (or token) of a real thing—everything gets a number; written words are translated into numbers. Think of a token as being like the root of a word. “Creat” is the root of many words, for instance, including Create, Creative, Creator, Creating, and Creation, so “Creat” could serve as a single token shared by all of them. Examples
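A toy illustration of the idea, assuming a hand-picked vocabulary (real tokenizers learn their subword pieces from data rather than from dictionary roots):

```python
# Toy tokenizer: each known piece of text gets an integer ID.
# The vocabulary and the "Creat" root are illustrative only.

def build_vocab(pieces):
    """Assign a unique integer ID to each token piece."""
    return {piece: idx for idx, piece in enumerate(pieces)}

def tokenize(word, vocab, root="Creat"):
    """Split a word into the shared root plus its suffix, then
    map each piece to its numeric ID."""
    if word.startswith(root) and word != root:
        parts = [root, word[len(root):]]
    else:
        parts = [word]
    return [vocab[p] for p in parts if p in vocab]

vocab = build_vocab(["Creat", "e", "ive", "or", "ing", "ion"])

print(tokenize("Creative", vocab))  # "Creat" + "ive" -> [0, 2]
print(tokenize("Creator", vocab))   # "Creat" + "or"  -> [0, 3]
```

The model never sees the letters at all, only the resulting sequences of numbers, which is why the same root can be reused across Create, Creating, Creation, and so on.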

More AI definitions here

Writing for AI Overviews & Generative Engine Optimization

AI Overviews and AI Mode are dramatically changing organic search traffic.

While search engine optimization (SEO) focuses on matching a user’s query, generative search also considers information about the searchers themselves—from their Google Docs usage to their social media footprint. This information informs not only the current search but future searches as well.

Optimizing your website’s content to boost its visibility in AI-driven search engines (ChatGPT, Perplexity, Gemini, Copilot, and Google AI) follows a similar path. Just as SEO helps brands increase visibility on search engines (Google, Microsoft Bing), generative engine optimization (GEO) is about how brands appear on AI-driven platforms. The goals of GEO and traditional SEO overlap: both use keywords and prioritize engaging content, conversational queries, and contextual phrasing, and both reward fast page loads, mobile friendliness, and a technically sound website. However, while SEO is concerned with metatags and links served from individual pages in response to user queries, GEO is about quick, direct responses synthesized from content drawn from multiple sources.

AI models are not trained solely to retrieve relevant documents based on exact-match phrasing. Generative search is about fitting into the reasoning process, starting with the user’s identity. That’s why your content is being judged not just on whether it ends up in the final answer, but on whether it helps the model reason its way toward that answer. Even if you follow all the common SEO practices, your content may not make it to the other side of the AI reasoning pipeline. In fact, the same content could go through the pipeline a second time and yield a different result. It’s not enough to be generally relevant to the final answer. Your content is now in direct competition with other plausible answers, so it must be more useful, precise, and complete than the next-best option.

It appears now that Google AI Overviews favors content that:

  • contains the who, what, and why

  • offers clarity and distinctiveness in its smaller sections

  • is written in natural, conversational terms (AI will attempt to deliver its answer in that same way)

  • uses strong introductory sentences that convey clear value

  • has H2 tags that align with user questions

  • is structured to match common question structures (open, closed, probing)

  • allows for restatement of queries and implied sub-questions, where a main question is broken down into smaller parts

  • contains multi-faceted answers

  • is rich in relationships

  • has explicit logical structures and supports causal progression

  • has clear headlines

  • cites sources

  • includes statistics and quotations

  • integrates multimedia

AI Overviews attempt to exclude content that is overly generalized, speculative, or optimized for clickbait over clarity. Vague and generic writing underperforms.  

LLMs are being trained to favor content that helps them reason well. Writers should attempt to match those paths that the models take to arrive at high-confidence answers. 

More information: 

How AI Mode and AI Overviews work based on patents and why we need new strategic focus on SEO

What is generative engine optimization (GEO)?