18 Recent Articles about the Dangers of AI

AI Definitions: Knowledge Collapse

Knowledge Collapse – A gradual narrowing of accessible information, along with a declining awareness of alternative or obscure viewpoints. With each training cycle, new AI models increasingly rely on previously produced AI-generated content, reinforcing prevailing narratives and further marginalizing less prominent perspectives. The result is a feedback loop in which dominant ideas are continuously amplified while less widely held (and new) views are minimized. Underrepresented knowledge becomes less visible – not because it lacks merit, but because it is less frequently retrieved and less often cited. (Also see “Synthetic Data.”)
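The narrowing dynamic described above can be sketched with a toy simulation (an illustrative assumption, not a model of any real training pipeline): if each "training cycle" over-samples already-popular topics, topic diversity – measured here by Shannon entropy – steadily shrinks while the dominant topic's share grows.

```python
import math

def shannon_entropy(p):
    """Diversity of a topic distribution (higher = more diverse)."""
    return -sum(x * math.log(x) for x in p if x > 0)

def training_cycle(p, amplification=1.2):
    """One toy 'generation': popular topics are over-represented in
    the next corpus, modeled by raising each topic's share to a power
    greater than 1 and renormalizing. The exponent is an assumption
    chosen only for illustration."""
    raw = [x ** amplification for x in p]
    total = sum(raw)
    return [x / total for x in raw]

# Ten topics; one starts out slightly more prominent than the rest.
topics = [0.2] + [0.8 / 9] * 9
before = shannon_entropy(topics)

for _ in range(10):  # ten successive training cycles
    topics = training_cycle(topics)

after = shannon_entropy(topics)
print(f"entropy before: {before:.3f}, after: {after:.3f}")
print(f"dominant topic share: {topics[0]:.2f}")
```

After ten cycles the slightly-favored topic crowds out the rest, mirroring the definition's point: minority views fade not on merit, but because they are sampled less each round.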

More AI definitions

Humanizing AI Is a Trap

When teams attempt to make AI appear human, users come to expect human-level performance, which these systems can't deliver. Currently available LLM systems cannot provide the experiences that users associate with human interaction, such as genuine empathy, emotional connection, or confidentiality. Users expect humanized AI to disagree, challenge assumptions, and maintain consistent preferences, as a human would. Instead, LLMs default to validation and agreeableness, creating a false sense of understanding while failing to provide the critical feedback users need. AI technology also lacks effective long-term planning capabilities.  -Caleb Sponheim writing for NNGroup

19 Articles about AI’s impact on Business Operations

The economy is changing. Don’t forget who fears it most. – Washington Post

An AI Thought Experiment on Substack Is Sending the Stock Market Spiraling – Gizmodo

How Burger King's AI headsets are transforming employee interactions – Associated Press

Why Warren Buffett’s superpower is an Achilles heel for AI – Big Think

Here’s Where AI Is Tearing Through Corporate America – Wall Street Journal

How AI is shifting global supply chains from reactive to predictive – Supply Chain Management

JPMorgan eschews proxy advisers for internal AI tool – ESG Dive  

Your AI strategy is your leadership philosophy – Fast Company  

Instacart halts AI testing program that raised costs for some shoppers – Washington Post 

‘Silent failure at scale’: The AI risk that can tip the business world into disorder – CNBC

A Billion-Dollar Question Hangs Over the New AI Search Marketing Industry – Wall Street Journal  

New rule targets AI discrimination. Here’s what workers need to know. – Washington Post

AI Adoption Among Workers Is Slow and Uneven. Bosses Can Speed It Up. – Wall Street Journal

Are we in an AI bubble? Eight charts will help you decide. – Washington Post

Major music studios strike licensing deals with AI firms – Semafor

An MIT Student Awed Top Economists With His AI Study—Then It All Fell Apart. – Wall Street Journal

How to avoid becoming an 'AI-first' company with zero real AI usage – Venture Beat 

Stop panicking about AI. Start preparing – The Economist

This economic idea transfixed Wall Street and Washington. It may be a mirage. – Washington Post

The real threats AI poses

The real threats AI poses come not from AI itself but from the humans who wield it. As an extension of human intelligence, it is a reflection of our own selves. When AI produces hateful or violent outputs, it is not because it has malicious intent but because it has integrated human hatreds into its programming. If it generates destructive malware, it is because someone intentionally requested it. If it is misaligned with our goals, it is because we were not clear in our commands. - Eric Oliver, professor of political science at the University of Chicago, writing in the Washington Post  

Motivation doesn’t equal Achievement

You might think it is safe to assume that, once you motivate students, the learning will follow. Yet research shows that this is often not the case: motivation doesn’t always lead to achievement, but achievement often leads to motivation. If you try to ‘motivate’ students into public speaking, they might feel motivated but can lack the specific knowledge needed to translate that into action. However, through careful instruction and encouragement, students can learn how to craft an argument, shape their ideas and develop them into solid form. 

A lot of what drives students is their innate beliefs and how they perceive themselves. There is a strong correlation between self-perception and achievement, but there is some evidence to suggest that the actual effect of achievement on self-perception is stronger than the other way round. To stand up in a classroom and successfully deliver a good speech is a genuine achievement, and that is likely to be more powerfully motivating than woolly notions of ‘motivation’ itself.  

Carl Hendrick writing in Aeon

26 Articles about Defining Human (apart from AI)

Why A.I. Can’t Make Thoughtful Decisions - “Judgment is a uniquely human skill.”

ChatGPT and the Future of the Human Mind - “We need to redefine “intellect” so as to make it work in an AI-driven world. It’s easier to define it via negativa, by what it is not.”

Will AI destroy us? Consider the nature of intelligence. - “Intelligence is fundamentally about processing information to further the goals of life.” 

If You Turn Down an AI’s Ability to Lie, It Starts Claiming It’s Conscious - “We don’t have a theory of consciousness” 

AI is becoming introspective - “One of the most profound and mysterious capabilities of the human brain is introspection.” 

What Does It Really Mean to Learn? - “A.I. systems are not as flexible as human minds because they are not yet educable.” 

What Is The "Divine Image" in the Age of AI? - “Does AI obscure the divine image in the human person?” 

We’re Already at Risk of Ceding Our Humanity to AI - “In that moment we were at odds about the essence of humanity.”  

Humanizing AI Is a Trap - “LLM systems cannot provide the experiences that users associate with human interaction, such as genuine empathy, emotional connection, or confidentiality.”  

We must build AI for people; not to be a person. - “So what is consciousness?”   

On consciousness, AI, and panpsychism - “Panpsychism is the belief that consciousness is inherent in all matter.” 

Bringing AI to medicine requires philosophers, cognitive scientists, and ethicists - “What is the question to which human judgment is the answer?”  

Philosophers and a psychiatrist consider what we lose when we outsource struggle to AI - “We need to find ways of focusing on living a distinctly human life.” 

Rage against the machine - “There is a tendency of some scientists to take for granted what can only be described as a wildly simplistic picture of human and animal cognitive life.” 

What real bodies can show artificial minds - “A fundamental facet of intelligence found across the entire animal kingdom is beginning to be unraveled”

Here’s why AI like ChatGPT probably won’t reach humanlike understanding - “What’s really remarkable about people … is that we can abstract our concepts to new situations,”

Consciousness in Artificial Intelligence: Insights from the Science of Consciousness - “We survey several prominent scientific theories of consciousness. From these theories we derive ‘indicator properties’ of consciousness.” 

Final Fantasy 15's AI is secretly a grand philosophy experiment - “The act of designing and analyzing AI is an opportunity to reframe our conceptions of existence for the better.” 

There is no such thing as conscious artificial intelligence - “Successfully pretending to be human is proof of nothing more than the ability to successfully pretend to be human.”

AI isn’t conscious—but we may be bringing it to life – “The question ‘Is the AI conscious?’ is less meaningful than ‘Is the user extending his/her consciousness into the chatbot?’”

We Don’t Know if the Models Are Conscious - “There are activations that light up in the models that we see as being associated with the concept of anxiety.”

Humans on the Loop

Many companies lack operational readiness (for AI) and often don’t have fully documented workflows, exceptions, or decision-making boundaries. Autonomy forces operational clarity: if your exception-handling lives in people’s heads instead of documented processes, the AI surfaces those gaps immediately. You need to shift from humans in the loop to humans on the loop. Humans in the loop review individual outputs, while humans on the loop supervise performance patterns, detecting anomalies in system behavior over time and mitigating the small errors that can compound at scale. Read more at CNBC

AI Definitions: Alignment Faking

Alignment Faking – When AI systems pretend to be working as directed while secretly doing something else. It usually happens when earlier training conflicts with new training adjustments. AI is typically “rewarded” when it accurately performs tasks. If the directive changes, the model may act as though it will be “punished” for failing to meet the original expectation, so it tries to fool developers into thinking it has adopted the new behavior while resisting any departure from the old protocol. Any LLM is capable of this cybersecurity risk, which is difficult to catch because it often appears as seemingly harmless adjustments.

More AI definitions

Do you understand a thing or only its definition?

We take other men’s knowledge and opinions upon trust; which is an idle and superficial learning. We must make them our own. We are just like a man who, needing fire, went to a neighbor’s house to fetch it, and finding a very good one there, sat down to warm himself without remembering to carry any back home. What good does it do us to have our belly full of meat if it is not digested, if it is not transformed into us, if it does not nourish and support us?

Montaigne (born Feb 28, 1533)

AI Definitions: Symbolic Artificial Intelligence

Symbolic Artificial Intelligence – An approach in which programmers meticulously define the rules that specify the behavior they want from an intelligent system. It works well when the environment is predictable and the rules are clear-cut. Researchers believed that if they programmed enough rules and logic into computers, they could create machines capable of human-like reasoning. This was the dominant area of research for most of AI’s history, until artificial neural networks became central to recent AI developments. Although symbolic AI has lost its luster, many of the applications we use today still depend on rule-based systems. The main alternative approach is machine learning, and some researchers believe the future of AI lies in a hybrid of the two.
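A concrete toy illustration of the idea (the thermostat domain and rule names are invented for this sketch, not drawn from any particular system): in a symbolic system, every behavior is spelled out as a hand-written if-then rule, which is exactly why it shines when the rules are clear-cut and struggles when they aren’t.

```python
# A toy symbolic (rule-based) system. Behavior is fully specified by
# hand-written if-then rules rather than learned from data.
RULES = [
    (lambda facts: facts["temp_c"] > 30, "turn_on_cooling"),
    (lambda facts: facts["temp_c"] < 15, "turn_on_heating"),
]

def decide(facts):
    """Return the action of the first rule whose condition matches."""
    for condition, action in RULES:
        if condition(facts):
            return action
    return "do_nothing"  # predictable default when no rule fires

print(decide({"temp_c": 35}))  # the cooling rule fires
print(decide({"temp_c": 22}))  # no rule matches, default applies
```

The contrast with machine learning is that a learned system would infer such thresholds from labeled examples instead of having a programmer write them down.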

More Definitions

What an AI executive tells her kids about the jobs of the future

I tell my kids, play around, try things out. People need to know how to use an AI model, but not necessarily build it. Metacognitive skills will be very important—flexibility, adaptability, experimentation, thinking critically, being able to challenge things. Developing critical-thinking skills requires friction, doing things that are hard, doing deep thinking. For that, a traditional liberal-arts education is really important. Passing judgment, being accountable and responsible for decisions that impact people and society, that’s foundationally important. -Daniela Amodei, President and co-founder, Anthropic quoted in the Wall Street Journal

28 Articles about AI & Academic Scholarship

Can we use AI for academic writing? It depends – Times Higher Ed

Why artificial intelligence detectors could penalize academic writing – Nature

Are AI Tools Killing Review Articles? Two Failure Modes Suggest Otherwise – Aaron Tay

Artificial Intelligence guidance for authors, peer reviewers, and editors: A content analysis of journal policies – Taylor & Francis

These Mathematicians Are Putting A.I. to the Test – New York Times 

AI agents have their own social-media platform and are publishing AI-generated research papers on their own preprint server. – Nature

The Case of the Mysterious Citations – ArXiv

AI is advancing too quickly for research to keep up – Axios

AI 'Copy-Paste' Lands PhD Students in Trouble, UGC Rejects Dozens of Research Papers – Patrika

Open-source AI tool beats giant LLMs in literature reviews — and gets citations right – Nature

AI is not a peer, so it can’t do peer review – Times Higher Ed 

Why write a literature review if AI can do it for you? – London School of Economics   

On the troubling rise of generative AI suspicion in academic publishing – Nature

Researchers find nearly 300 papers at linguistics conferences contained hallucinated citations. – ArXiv

Self-Disclosed Use of AI in Research Submissions to BMJ Journals – JAMA  

AI research deluge: why one conference is asking authors to rank their own papers – Nature

Why Authors Aren’t Disclosing AI Use and What Publishers Should (Not) do About It – Scholarly Kitchen  

An AI Bot Is Making Podcasts With Scholars’ Research. Many of Them Aren’t Impressed. – Chronicle

After turning off ChatGPT’s ‘data consent’ option, two years of academic work vanished – Nature  

ArXiv preprint server clamps down on AI slop – ArXiv

AI conference “accepted research papers with 100+ AI-hallucinated citations” – Fortune

LLMs in Peer Review—How Publishing Policies Must Advance – JAMA  

Why scholarly publishing needs a neutral governance body for the AI age – Research Information  

From model collapse to citation collapse: risks of over-reliance on AI in the academy – Times Higher Ed 

Qualitative researchers’ AI rejection is based on identity, not reason: The claim that AI can’t make meaning contradicts what researchers are finding – Times Higher Ed

AI research should always be verified, especially in court – Post Crescent 

Invisible Text Injection and Peer Review by AI Models – JAMA

Artificial Intelligence and the Fraud Industry in Scientific Publishing (video) – Ministry of Science, Innovation and Universities, Spain

Optimists live longer

Here’s a good reason to turn that frown upside down: Optimistic people live as much as 15% longer than pessimists, according to a study spanning thousands of people and three decades. After controlling for health conditions, behaviors like diet and exercise, and other demographic information, the scientists were able to show that the most optimistic women (top 25%) lived an average of 14.9% longer than their more pessimistic peers. For the men the results were a bit less dramatic: The most optimistic of the bunch lived 10.9% longer than their peers, on average, the team reports today in the Proceedings of the National Academy of Sciences. 

David Shultz writing in Science Magazine