AI automation versus collaboration

"Using AI well will require knowing when to automate versus when to collaborate. This is not necessarily a binary choice, and the boundaries between human expertise and AI’s capabilities for expert judgment will continually evolve as AI’s capabilities advance. Although collaboration is not intrinsically better than automation, premature or excess automation—that is, automation that takes on entire jobs when it’s ready for only a subset of job tasks—is generally worse than collaboration." -David Autor and James Manyika writing in The Atlantic 

24 Articles about AI & Academic Scholarship

Peer Review Paranoia: The system is built on trust between scholars. AI is undermining that. – Chronicle of Higher Ed

AI Makes Research Easy. Maybe Too Easy. – Wall Street Journal

AI-generated scientific hypotheses lag human ones when put to the test – Science.org

JAMA Editors on Artificial Intelligence in Peer Review – JAMA  

AI tool labels more than 1,000 journals for ‘questionable,’ possibly shady practices – Science.org

AI for Scientific Integrity: Detecting Ethical Breaches, Errors, and Misconduct in Manuscripts – Frontiers  

What counts as plagiarism? AI-generated papers pose new risks – Nature

Image fraud in nuclear medicine research – Springer

Does ChatGPT Ignore Article Retractions and Other Reliability Concerns? – Wiley

NIH to reject research applications written by AI – Beckers Hospital Review

AI-based fake papers are a new threat to academic publishing, says journal editor – Times Higher Ed

AI-Assisted Tools for Scientific Review Writing: Opportunities and Cautions – ACS Publications

Comparing AI-generated and human peer reviews: A study on 11 articles – ScienceDirect

Evaluating the potential risks of employing large language models in peer review – Wiley

One-fifth of computer science papers may include AI content – Science.org

Artificial intelligence as author: Can scientific reviewers recognize GPT-4o-generated manuscripts? – ScienceDirect

Fraudulent Scientific Papers Are Rapidly Increasing, Study Finds – New York Times 

AI can’t learn from what researchers don’t share – Research Professional News

AI content is tainting preprints: how moderators are fighting back – Nature

“AI can simplify the process enormously and help publishers get ahead of the industry’s upheavals,” says publisher’s head of marketing – Research Information

AI Writing Disclosures Are a Joke. Here’s How to Improve Them. – Chronicle of Higher Ed

Make all research data available for AI learning, scientists urge – Research Professional News

Machine learning model flags almost 10 percent of cancer research literature as paper mill papers – bioRxiv

AI-based research mentors: Plausible scenarios and ethical issues – Taylor & Francis Online  

The DuckDuckGo AI Option

When you use ChatGPT, Claude or Llama technology within DuckDuckGo’s chatbot, the company acts as a middleman that limits what the AI companies know about you and what you’re chatting about. DuckDuckGo says that when you use its chatbot, your conversations aren’t used to train AI for DuckDuckGo or any of its partner AI companies. Your chats may be saved only anonymously for, at most, 30 days, with limited exceptions. And the AI companies don’t have access to personal information such as your device’s unique digital ID number, which could be used to assemble dossiers on your habits. -Washington Post

This is a Mistake

A few years ago, I saw a cartoon of a man on his deathbed saying, “I wish I’d bought more crap.” It has always amazed me that many wealthy people keep working to increase their wealth, amassing far more money than they could possibly spend or even usefully bequeath. One day I asked a wealthy friend why this is so. Many people who have gotten rich know how to measure their self-worth only in pecuniary terms, he explained, so they stay on the hamster wheel, year after year. They believe that at some point, they will finally accumulate enough to feel truly successful, happy, and therefore ready to die. This is a mistake, and not a benign one.  

Arthur C. Brooks writing in The Atlantic

A truth about today’s AI Tools

"A truth about today’s AI tools: They’re not really information experts. They have challenges determining which source is the most authoritative and most recent. It’s fair to ask whether relying on any of these AI tools as your new Google is a good idea. In many ways, AI is best suited for complex questions that take some hunting. In the best cases, AI tools could find needles in a haystack — answers that weren’t obvious in a traditional Google search." - Washington Post

CS Grads Can't Find Jobs

A recent graduate triple-majored in computer science, math, and computational science and has completed the coursework for a computer-science Ph.D. He would prefer to work instead of finishing his degree, but he has found it almost impossible to secure a job. “We’re in an AI revolution, and I am a specialist in the kind of AI that we’re doing the revolution with, and I can’t find anything.” -The Atlantic

20 Recent Articles about the Impact of AI on Students

What the panic about kids using AI to cheat gets wrong – Vox

How AI Is Changing—Not ‘Killing’—College – Inside Higher Ed

AI Makes Research Easy. Maybe Too Easy. – Wall Street Journal 

The Computer-Science Bubble Is Bursting – The Atlantic

Students Are Using ChatGPT to Write Their Personal Essays Now – Chronicle of Higher Ed

These workers don’t fear artificial intelligence. They’re getting degrees in it. – Washington Post

Almost all the class of 2026 are using AI to do their work – The Atlantic

Duke Just Introduced An Essay Question About AI—Here’s How To Tackle It – Forbes

ChatGPT’s Study Mode Is Here. It Won’t Fix Education’s AI Problems – Wired  

AI is helping students be more independent, but the isolation could be career poison – The Markup

I'm a college writing professor. How I think students should use AI this fall – Mashable

ChatGPT's new study mode won't give you the answers – Axios

University students feel ‘anxious, confused and distrustful’ about AI in the classroom and among their peers – The Conversation

I Teach Creative Writing. This Is What A.I. Is Doing to Students. – New York Times

How Are Students Really Using AI? Here’s what the data tell us. – Chronicle of Higher Ed

So long, study guides? The AI industry is going after students – NPR

At one elite college, over 80% of students now use AI – but it’s not all about outsourcing their work – The Conversation

Students have been called to the office — and even arrested — for AI surveillance false alarms – Associated Press  

AI in education's potential privacy nightmare – Axios

AI to the Rescue: It’s an all-purpose study tool — it’s changing students’ relationships with professors & peers – Chronicle of Higher Ed

Selling Out

We "sell out" whenever we fail to take ownership of who we are. It's much easier to default to the expectations of friends/work/society/church than to take responsibility for our own thinking and actions. Turning over control of what we have been entrusted with to someone (or something) else is an attempt to shift the responsibility off our shoulders, so there’s someone else to blame.

AI Advice for Students


1- Think Beyond Academic Integrity

Not just “Is this cheating or not cheating?”

But also, “Am I taking the opportunity to learn, practice, and cultivate my skills?”

To some students, college now feels like, “How well I can use ChatGPT.” Others describe writing essays as a coordination problem: get the prompt, feed it to the bot, skim the output, add some filler, hit submit. No thinking required, just interface management. 

2- Define Your Own Educational Goals

Ask yourself: “Besides grades, what are my goals as a student?”

Prioritize learning and skill development

Seize opportunities to get the practice you need to become a better thinker, writer, and communicator. 

3- Prompt to Challenge Your Thinking

Instead of outsourcing your thinking (“Suggest a thesis statement I can use for my essay.”), look for ways to think critically about the subject (“Ask me tough questions to help me figure out my thesis statement.”).

Don’t just ask, “Am I outsourcing the writing to AI?” Ask, “Am I outsourcing the thinking to AI?” We must use AI to expand our mind’s capacity to engage, rather than using it to outsource our thinking.  

4- Focus on AI Literacy & Integration

Unless you want to build AI systems and become a data scientist, focus on taking outdated processes and updating them to make use of the available AI tools. The goal should be to understand the benefits, limitations, and ethics of AI, and to figure out how to mesh it into your workday.

5- Double Down on Your Humanity

•  We can’t let it strip us of our humanity.

•  Optimistically, AI may be “a piece of technology that, instead of replacing humanity, amplifies it.”

•  We must retain oversight & not lose ourselves by depending on the machine.

•  Doubling down on what makes you human may be what saves you from being replaced or minimized by AI. 

6- Get Well-Rounded

Be well-rounded in the liberal arts: think of your gen ed classes as core classes now. Focus specifically on growing these skills: analytic thinking, creativity, information literacy, resilience, agility, leadership, self-motivation, empathy, and curiosity. Their value will rise as AI takes over routine tasks.

7- Distinguish between AI-generated content, AI-assisted content, & AI-supplemented content

Group A ❌: AI-generated content

Group B ✅: AI-assisted content/writing (AI-facilitated writing/learning)

Group C 🤔: AI-supplemented content

AI-generated content ❌ is content entirely produced by the AI, or content whose sections are produced by the AI, based on detailed instructions (prompts) provided by the author. Some AI is best thought of as a set of automation tools: closed systems that do their work without oversight, like ATMs and dishwashers.

In academia, it is not acceptable under normal circumstances unless there is a significant and clear reason why it was necessary. In business, however, it is likely to be treated as acceptable when the content is merely informational and not intended to be creative. The focus in this situation is accuracy and speed with minimal effort, as opposed to authenticity. For instance, in a summary of a business meeting or an email answering a particular question about the business, it is assumed the writer may incorporate AI-generated content.

Group B ✅ is work that is predominantly written by an individual but has been improved with the aid of AI tools. AI is part of the process: the author remains in control, and the AI merely acts as a polishing tool. As opposed to automation tools, these are collaboration tools, like chain saws and word processors. In any given application, AI is going to automate or it’s going to collaborate, depending on how we design it and how someone chooses to use it.

This kind of assistance is generally accepted by most publishers, as well as the Committee on Publication Ethics, without the need for formal disclosure. It includes creating outlines, improving clarity and grammar, summarizing, brainstorming, generating transcriptions, condensing notes, creating study guides and practice questions, editing, and suggesting alternative approaches to a problem.

Group C 🤔 includes changing phrasing, generating a citation list, revising sentence structure, reducing word count, etc. Writers and publishers disagree about whether using AI in this way is ethical or not.

22 Articles about Relationships with AI

OpenAI to safeguard ChatGPT for teens and people in crisis – Axios

ChatGPT-powered dolls are becoming caregivers in South Korea – Semafor

AI has passed the aesthetic Turing Test – and it’s changing our relationship with art – The Conversation

AI Chatbots Have New Boundaries, So I Tried to Get One to Break Up With Me – PopSugar

Dating an AI: How Much Is It Really Happening? – Wall Street Journal  

The personhood trap: How AI fakes human personality – Ars Technica

A Troubled Man, His Chatbot and a Murder-Suicide in Old Greenwich. – Wall Street Journal

They’re Stuffed Animals: They’re Also Chatbots – New York Times

ChatGPT is not your therapist – The Miami Hurricane

OpenAI Is Updating ChatGPT to Better Support Users in Mental Distress – Wall Street Journal  

What My Daughter Told ChatGPT Before She Took Her Life – New York Times 

Meta’s flirty AI chatbot invited a retiree to New York. He never made it home. – Reuters

The Looming Social Crisis of AI Friends and Chatbot Therapists – Derek Thompson

Illinois blocks AI from being your therapist – Axios  

Support Group Launches for People Suffering “AI Psychosis” – Futurism

Teens say they are turning to AI for friendship – Associated Press

What Would a Real Friendship With A.I. Look Like? Maybe Like Hers. – New York Times

He Had Dangerous Delusions. ChatGPT Admitted It Made Them Worse. – Wall Street Journal

Large language models are proficient in solving and creating emotional intelligence tests – Nature

A.I. Griefbots Are Just Our Latest Attempt to Talk to the Dead – New York Times

He said, she said, it said: I used ChatGPT as a couple's counselor. How did we fare? – NPR  

Can an AI Companion Substitute for Real Human Relationships? – Psychology Today

Does AI Replace the Expert?

The question is not whether AI can do things that experts cannot do on their own—it can. Expert humans often bring something that today’s AI models cannot: situational context, tacit knowledge, ethical intuition, emotional intelligence, and the ability to weigh consequences that fall outside the data. The value is not in substituting one expert for another, or in outsourcing fully to the machine, or indeed in presuming that human expertise will always be superior, but in leveraging human and rapidly evolving machine capabilities to achieve the best results. -David Autor and James Manyika writing in The Atlantic