The AI Flattery Trap

Managing negative emotions is a fundamental function of the brain, enabling you to build resilience and learn. But experts say that A.I. chatbots allow you to bypass that emotional work, instead lighting up your brain’s reward system every time they agree with you, much like with social media “likes” and self-affirmations. That means A.I. chatbots can quickly become echo chambers, potentially eroding critical thinking skills and making you less willing to change your mind. -New York Times

18 Articles about AI’s impact on College Faculty & Administrators

Can Colleges Be Run Using AI? – Chronicle of Higher Ed 

Dozens of fake college websites built with or supplemented by gen AI – Inside Higher Ed

The AI Takeover of Education Is Just Getting Started – The Atlantic

Student Loan Defaults Threaten Federal Aid At 1,100 Colleges – Forbes  

African universities risk being left behind in AI era - Semafor 

A gigantic public experiment that no one has asked for – Popular Information

In California, Colleges Pay a Steep Price for Faulty AI Detectors – Undark  

Universities are rethinking computer science curriculum in response to AI tools – Tech Spot 

‘It’s just bots talking to bots’: AI is running rampant on college campuses as students and professors alike lean on the tech - Fortune

How Do You Teach Computer Science in the A.I. Era? - The New York Times

California colleges spend millions to catch plagiarism and AI. Is the faulty tech worth it? – Cal Matters

AI usage in jobs could lead to AI ‘trade schools,’ expert says - Semafor                  

How One College Library Plans to Cut Through the AI Hype - Inside Higher Ed

The impact of language models on the humanities and vice versa – Nature

Universities in the UK ‘At Risk of Overassessing’ in Response to AI - Inside Higher Ed

AI in education's potential privacy nightmare - Axios

When AI rejects your grant proposal: algorithms are helping to make funding decisions – Nature

Faculty Latest Targets of Big Tech’s AI-ification of Higher Ed - Inside Higher Ed

AI definitions: Workslop

Workslop - AI-generated content that masquerades as good work, but lacks substance and does not meaningfully advance a given task. The overwritten language includes unnecessarily long words and empty phrases, similar to student submissions focused on meeting an assignment’s length requirement rather than making every sentence and bullet point push the ball forward.

More AI definitions here

Each time you lie

Each time you lie, even if you’re not caught, you “become a little more of this ugly thing: a liar. Character is always in the making, with each morally valenced action, whether right or wrong, affecting our characters, the people who we are. You become the person who could commit such an act, and how you are known in the world is irrelevant to this state of being.” In the end, who we are inside matters more than what others think of us.

Michael Dirda in a Washington Post review of Plato at the Googleplex by Rebecca Newberger Goldstein

How to Spot a Liar

Can you spot a liar? Is averting the eyes a sign? Perhaps nervous behavior like a sweaty appearance? How about rapid blinking? Researchers will tell you the answer is no, no, and no. There are no telltale nonverbal signs of guilt. Not shifting posture or pausing. There is a small increase in pitch—but it’s too small for the human ear to detect. Jessica Seigel writes:

Researchers have found little evidence to support this belief despite decades of searching. “One of the problems we face as scholars of lying is that everybody thinks they know how lying works,” says Hartwig, who coauthored a study of nonverbal cues to lying in the Annual Review of Psychology.  

There’s also “no evidence that people were any better at detecting lies told by criminals or wrongly accused suspects in police investigations than those told by laboratory volunteers.” And it doesn’t matter whether the deceit is verbal or nonverbal.

While liars feel more anxious and nervous, those are internal feelings—not observable behavior. 

However, there are some ways to spot what may be evidence of lying:

1. Contradictions. If a subject is allowed to talk enough, they may reveal discrepancies in their story or their story may contradict known information.

2. Details. Someone who is telling the truth about an event is more likely to provide details. In one experiment, truth-tellers provided 76% more detail than those who were being deceptive.

Stephen Goforth

AI Climate Costs

You’d be hard-pressed to ask enough questions of ChatGPT, Perplexity or other AI services to meaningfully change your personal emissions. Asking AI eight simple text questions a day, every day of the year, adds up to less than 0.1 ounces of climate pollution, our data suggests. The exception is AI-generated video: One five-second clip [is] equivalent to riding 38 miles on an e-bike. Overall, our personal and work-related digital emissions are dominated by just three things: TV, digital storage and internet or video use on your computer.  -Washington Post

AI’s risks

"As with the technology fears of the past, AI’s risks—the unintended consequences of autonomous systems, deepfakes, control by rogue actors, et al.—will be real, but for the foreseeable future they will be manageable in much the same way that every important technology has been in the past—through evolving rules, practices, and system refinements. While it’s easy to imagine a dystopia where superintelligent and highly dexterous legions of robots dominate and revolutionize life on Earth, that’s still the realm of science fiction, where technology fears have always found a home." - David Moschella writing for the Information Technology and Innovation Foundation

Reframe anxiety

Reframe anxiety, not as dread but as evidence of an exciting opportunity. The Harvard Medical School psychiatrist Kevin Majeres has defined anxiety as “adrenaline with a negative frame.” The right objective is not to get rid of the adrenaline, which is a performance-enhancing hormone, but to change the frame. This can be as simple as saying, when something is stressing you out, “This is exciting.” -Arthur C. Brooks writing in The Atlantic

23 Articles about the Dangers of AI

Hey, AI Job Doomers: Wanna Bet? - Policy Arena

‘I love you too!’ My family’s creepy, unsettling week with an AI toy – The Guardian

Should We Listen to the A.I. Doomsayers? – New York Times 

Those who predict that superintelligence will destroy humanity serve the same interests as those who believe that it will solve all of our problems – The Atlantic  

AI Is Going to Consume a Lot of Energy. It Can Also Help Us Consume Less. – Wall Street Journal

We did the math on AI’s energy footprint. – MIT Tech Review  

AI’s Emerging Teen-Health Crisis – The Atlantic 

Anthropic CEO on AI: "There's a 25% chance that things go really, really badly" – Axios

Parents, Your Job Has Changed in the A.I. Era – New York Times

The family of teenager who died by suicide alleges OpenAI's ChatGPT is to blame – NBC News

A.I. Is Coming for Culture – The New Yorker

Our AI Fears Run Long and Deep – The Atlantic

ChatGPT Convinced 37-Year-Old Psychologist His Sore Throat Was Fine; Biopsy Revealed Stage 4 Cancer – Mashable  

OpenAI Is Updating ChatGPT to Better Support Users in Mental Distress – Wall Street Journal

AI Experts No Longer Saving for Retirement Because They Assume AI Will Kill Us All by Then – Futurism

What Worries Americans About AI? Politics, Jobs and Friends - Cnet

AI as teleportation – Geoffrey Litt

What happens when fake AI celebrities chat with teens - Washington Post

Complaints about deepfake AI videos more than doubled this year, FBI says. Here are warnings from experts. – CBS News

AI Is Grown, Not Built: Nobody knows exactly what an AI will become. That’s very bad. – The Atlantic

AI Is Making Online Dating Even Worse – The Cut

America is in a literacy crisis. Is AI the solution or part of the problem? - CNN

How Americans View AI and Its Impact on People and Society – Pew Research

Insight into who will respond better in a crisis

A person’s capacity for healthy outcomes during difficulties is tied to their ability to define their life’s goals and values apart from the surrounding pressure to conform to a particular viewpoint.

In his book Generation to Generation, Edwin Friedman offers a way to test resistance to togetherness pressures, that is, the ability to say “I” when others are demanding “you” and “we.”

When presented with an issue that does not include “shoulds” and “musts,” some listeners will respond in a way that better defines themselves (such as “I agree” or “I disagree”). This person is likely to function well emotionally during a crisis. Other people may respond by attempting to define the speaker (comments like “How can you say that when…” or “After saying that I wonder if you are really one of us”). This indicates the person will likely resist progress toward healthy outcomes during crises and difficulties. People who more clearly define themselves are also more likely to take personal responsibility, whereas those who focus on the speaker are more likely to blame outside forces for their situations.

One of the founding fathers of family therapy, Murray Bowen, suggested the capacity to define one’s own life’s goals and values apart from surrounding pressure, that is, to be a “relatively nonanxious presence in the midst of anxious systems” is an indication of taking “maximum responsibility for one’s own destiny and emotional being.” It shows up in “the breadth of one’s repertoire of responses when confronted with crisis.” The concept shouldn’t be confused with narcissism. For Bowen, differentiation means the capacity to be an “I” while remaining connected.

Stephen Goforth

10,000 hours of deliberate practice is not enough

In a study of violin students at a conservatory in Berlin in the 1980s… there was something that almost everyone has subsequently overlooked. “Deliberate practice,” they observed, “is an effortful activity that can be sustained only for a limited time each day.” Practice too little and you never become world-class. Practice too much, though, and you increase the odds of being struck down by injury, draining yourself mentally, or burning out. To succeed, students must “avoid exhaustion” and “limit practice to an amount from which they can completely recover on a daily or weekly basis.”

Everybody speed-reads through the discussion of sleep and leisure and argues about the 10,000 hours (necessary to become world-class in anything).

This illustrates a blind spot that scientists, scholars, and almost all of us share: a tendency to focus on focused work, to assume that the road to greater creativity is paved by life hacks, propped up by eccentric habits, or smoothed by Adderall or LSD. Those who research world-class performance focus only on what students do in the gym or track or practice room. Everybody focuses on the most obvious, measurable forms of work and tries to make those more effective and more productive. They don’t ask whether there are other ways to improve performance, and improve your life.

This is how we’ve come to believe that world-class performance comes after 10,000 hours of practice. But that’s wrong. It comes after 10,000 hours of deliberate practice, 12,500 hours of deliberate rest, and 30,000 hours of sleep.

Alex Soojung-Kim Pang writing in Nautilus

Advice for college students dealing with an AI future

Major in a subject that offers enduring, transferable skills. Believe it or not, that could be the liberal arts. It’s actually quite risky to go to school to learn a trade or a particular skill, because you don’t know what the future holds. You need to try to think about acquiring a skill set that’s going to be future-proof and last you for 45 years of working life. Of course, when faced with enormous uncertainty, many young people take the opposite approach and pursue something with a sure path to immediate employment. The question of the day is how many of those paths AI will soon foreclose. -The Atlantic